This feels very important, and the concepts of the EA Network, EA as coordination, and EA as an incubator should become standard even if they don't completely transform EA. Thanks for writing it so clearly.
I mainly want to suggest that this relates strongly to the discussion of sub-communities in EA on the recent 80k podcast, and especially the conversation between Rob and Arden at the end.
Robert Wiblin: [...] And it makes me wonder like sometimes whether one of these groups should like use the term EA and the other group should maybe use something else?
Like perhaps the people who are focused on the long-term should mostly talk about themselves as long-termists, and then they can have the kind of the internal culture that makes sense given that focus.
Peter Singer: That’s a possibility. And that might help the other groups that you’re referring to make their views clear.
So that certainly could help. I do think that actually there’s benefits for the longtermists too in having a successful and broad EA movement. Because just as you know, I’ve seen this in the animal movement. I spoke earlier about how the animal welfare movement, when I first got into it was focused on cats and dogs and people who were attracted to that.
And I clearly criticized that, but at the same time, I have to recognize that there are people who come into the animal movement because of their concern for cats and dogs who later move on to understand that the number of farm animals suffering is vastly greater than the number of cats and dogs suffering and that typically the farm animals suffer more than the cats and dogs, and so they’ve added to the strength of the broader, and as I see it more important, animal welfare organizations or animal rights organizations that are working for farm animals. So I think it’s possible that something similar can happen in the EA movement. That is, that people can get attracted to EA through the idea of helping people in extreme poverty.
And then they’re part of a community that will hear arguments about long-termism. And maybe you’ll be able to recruit more talented people to do that research that needs to be done if there’s a broad and successful EA movement.
This part of the discussion really rang true to me, and I want to hear more serious discussion of this topic. To many people outside the community, it’s not at all clear what AI research, animal welfare, and global poverty have in common. Whatever corner of the movement they encounter first will shape their perception of EA, which obviously affects both their likelihood of participating and the chance that they end up giving to an effective cause.
Most of us recognize that EA is a question, not an answer, but the question that ties these topics together itself requires substantial context and explanation for the uninitiated (people unused to this style of reasoning). In addition, entertaining counterintuitive notions is central to much EA discourse, but many people simply do not accept counterintuitive conclusions, as a matter of habit and worldview.
The way the movement is structured now, I fear that large swaths of the population are basically excluded by these obstacles. I think we have a tendency to write these people off. But in the “network” sense, many of these people probably have a lot to contribute in the way of skills, money, and ideas. There’s a lot of value—real value of the kind we like to quantify when we think about big cause areas—lost in failing to include them.
I recognize that EA movement building is an accepted cause area, but I’d like to see our conception of that cause area broaden considerably. Even the EA label is enough to turn some people off, and strategies for communicating the EA message to the wider world have lagged far behind the professionalization of discourse within the “community.”