I’m not a member of the EA community, and in fact have been quite sceptical of it, but I do believe in the idea of altruism being effective, so I wanted to engage on a post like this. For context, I work on public health in developing countries, and have worked in a variety of fields in the traditional aid sector, from agriculture to women’s rights to civil society – in my observation public health is the most effective, followed by agriculture. While I’m sceptical of EA as a community, I do believe in some of the tenets, and even use services like GiveWell to guide my own donations. I wanted to ask some questions, if you or community members are willing to answer – bear with me as I don’t know the EA jargon that well.
One of the main reasons I’m sceptical is sort of a generalised scepticism of cliques, identities, and subcultures in general. Human beings are social animals, and we naturally seek status. So when a community/subculture forms, suddenly people seek status in it and seek to associate with its ‘leaders’ or popular causes, and this short-circuits the ostensibly rational analysis people think they’re doing. Of course we don’t know we’re doing this, we think we’re being perfectly rational, but a lot of what I hear coming out of the EA community seems to follow this dynamic—longtermism, crypto, FTX and its supporters all feel typical of social dynamics in other communities, and I just don’t think there’s any way for human beings to get around this. Does this make the idea of an EA ‘community’ self-defeating?
Similarly, the other aspect I don’t see EA (as it filters to the outside) dealing with well is humility, specifically humility about one’s own motivations and humanity. Knowing we’re susceptible to all sorts of faults, we should acknowledge these when we make plans. When we observe thousands of people saying ‘I’ll do good things when I’m rich/powerful’, and then getting caught up in their own world, it seems absurd to say ‘aha, but I’ll be different!’ We have to assume that some of our motivations aren’t entirely pure or rational. EA seems to think we can put that aside with enough technical terminology and get to pure reason, but I just don’t see it happening. Once it becomes a human social structure, how can humility remain a part of an EA community? Human communities just don’t tend to work that way—and the individuals who DO see subtleties tend to lose status in closed communities.
My own view is that things that work in the world are rare, so when you find one you need to do what you can to replicate or widen it. You also need to double-check it constantly – in my work I insist on regularly interacting at the clinic level to see if what I’m building is actually used and useful, and changing it if it isn’t. I want to fully acknowledge the massive pathologies of the formal aid sector, but I work to mitigate those in the course of my job. I haven’t, to be honest, seen anything from the EA community that would help me with that other than an articulation of fairly obvious general principles. So what is it all for?
Does this make the idea of an EA ‘community’ self-defeating?
I think that it’s possible to have a community that minimizes the distortion of these social dynamics, or at least is capable of doing significant good despite the distortions—but as I argued in the post, at scale this is far harder, and I think it might be net negative to try to build a single global community the way EA seems to have decided to do.
My own view is that things that work in the world are rare, so when you find one you need to do what you can to replicate or widen it.
Agreed—and that was one of the key things early EA emphasized—and it’s been accepted, in small part due to EA, to the point that it is now conventional wisdom.
I want to fully acknowledge the massive pathologies of the formal aid sector, but I work to mitigate those in the course of my job. I haven’t, to be honest, seen anything from the EA community that would help me with that other than an articulation of fairly obvious general principles.
I don’t think that EA as a movement is well placed to provide ways to reform traditional aid. As you point out, it has many pathologies, and I don’t think there is a simple answer to fix a complex system deeply embedded in geopolitics and social reality. I do think that EA-promoted ideas, including giving directly, have the potential to displace some of the broken systems, and we should work towards figuring out where simpler systems can replace current complex but broken ones. I also think that an EA-like focus on measuring outcomes helps push for the parts of traditional aid that do work. That is, it identifies specific programs which are effective and evaluates and scales them. This isn’t to say that traditional aid doesn’t have related efforts which are also successful, but I think overall it’s helpful to have external pushes from EA and people who embrace related approaches for this work.
Thanks for this response! I don’t want to go too deep on the traditional aid sector, being still in it and all, but I do think it could do with a lot more thinking about real effectiveness, or even just to occasionally step back and ask ‘what are we trying to do here?’ I don’t disagree with anything you’ve written, except to wonder if it’s even possible to have non-global communities anymore, and whether even small-scale communities succumb to the same dynamics.
It’s just that when an idea becomes popular it becomes a community, and the community imports the social dynamics. I suppose what I would envision for something EA-like is in line with what you said about conventional wisdom – a kind of invisible force that no one really identifies with, but which nudges decisions in a better direction. To some extent I wonder if game theory and microeconomics have maybe achieved this – people seem to subconsciously think a lot more in terms of cost/benefit than they did 20 years ago. But whenever an online community becomes a ‘thing’, I really feel like those social dynamics overwhelm it – and my experience living in Berlin suggests to me that even the tiniest subcultures develop the same dynamics.
Speaking of geopolitics and social reality, do you think EA grapples with that well? In my experience one of the most crucial elements of effectiveness for aid projects has been good buy-in from the local government and community, which can be a messy, political and extremely tedious process, and I’m lucky enough to have an employer that takes the time. What’s the EA opinion on ‘do something suboptimal because otherwise one department of a ministry will hate you and your whole project is screwed’?
Welcome! I’m a bit pushed for time but thought I’d offer up an answer to your question:
If I had to pick one USP of EA, it’s the serious attempt to prioritise between causes.