I think realizing that different people have different capacities for impact is importantly true. I also think it’s important and true to note that the EA community is less well set up to accommodate many people than other communities are. I think what I said is also kinder to say, in the long run, than casual reassurances that make it harder for people to understand what’s going on. I think most of the other comments do not come from an accurate model of what’s most kind to Olivia (and onlookers) in the long run.
FWIW I strongly agree with this.
Will we permanently have low capacity?
I think it is hard to grow fast and stay nuanced, but I’m personally optimistic about ending up as a large community in the long run (not next year, but maybe next decade). I think we can sow seeds that help with that (e.g. by making people feel glad that they interacted with the community even if they do end up deciding that, at least for now, they can find more joy and fulfillment elsewhere).
Good question! I’m pretty uncertain about the ideal growth rate and eventual size of “the EA community”; in my mind this is among the more important unresolved strategic questions (though I suspect it’ll only become significantly action-relevant in a few years).
In any case, by expressing my agreement with Linch, I didn’t mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime “making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere” is (in some cases) the right goal.
Thanks 😊.
Yeah, I’ve noticed that this is a big conversation right now.
My personal take
EA ideas are nuanced, and those ideas do (and should) move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people.
However, the core bit of effective altruism, something like “help others as much as we can and change our minds when we’re given a good reason to”, does seem like an idea that has room for a much wider ecosystem than we have.
I’m personally hopeful we’ll get better at striking a balance.
I think it might be possible to have both a small group that is highly connected and dedicated (who maybe can move quickly) and many more adjacent people and groups that feel part of our wider team.
Multiple groups co-existing means we can broadly be more inclusive, with communities that accommodate a very wide range of caring and curious people, where everyone who cares about the effective altruism project can feel they belong and can add value.
At the same time, we can maybe still get the advantages of a smaller group, because smaller groups still exist too.
More elaboration (because I overthink everything 🤣)
Organisations like GWWC do wonders for creating a version of effective altruism that is more accessible, and that is distinct from the vibe of, say, the academic field of “global priorities research”.
I think it is probably worth it, on the margin, to invest a little more effort into the people who are sympathetic to the core effective altruism idea but who might, for whatever reason, not find a full sense of meaning and belonging within the smaller group of people who are more intense and more weird.
I also think it might be helpful to put a tonne of thought into what community builders are supposed to be optimizing for. Exactly what that thing is, I’m not sure, but I feel like it hasn’t quite been nailed down just yet, and lots of people are trying to move us closer to it from different sides.
Some people seem to be pushing for things like less jargon and more inclusivity. Others point out that there is a trade-off here, because we do want some people to be thinking outside the Overton Window. The community also seems quite capacity-constrained, and high-fidelity communication takes a lot of time and effort.
If we’re talking to 20 people for one hour each, we’re not spending those 20 hours talking to just one incredibly curious person who has plenty of reasonable objections and who therefore needs someone, or several people, to explore the various nuances with them (like people did with me, possibly mistakenly 😛, when I first became interested in effective altruism, and I’m so incredibly grateful they did). If we’re spending 20 hours having in-depth conversations with one person, we’re not having in-depth conversations with someone else. These trade-offs sadly exist whether or not we are consciously aware of them.
I think there are some things we can do that are big wins at low cost, though, like just being nice to anyone who is curious about this “effective altruism” thing. Even if we don’t spend 20 hours with everyone, we can usually spend 5 minutes saying hello and making people who care feel welcome, and feel that their showing up is valued (because, imo, it definitely should be!).
Personally, I hope there will be more groups that are about effective altruism ideas where more people can feel like they truly belong. These wider groups would maybe be a little bit distinct from the smaller group(s) of people who are willing to be really weird and move really fast and give up everything for the effective altruism project. However, maybe everyone, despite having their own little sub-communities, still sees each other as wider allies without needing to be under one single banner.
Basically, I feel like the core thrust of effective altruism (helping others more effectively, using reason and evidence to form views) could fit a lot more people. I also feel it’s good to have more tightly knit groups with a more specific purpose (like trying to push the frontiers of doing as much good as possible, in ways that may be less legible to a large audience).
I am hopeful these two types of communities can co-exist. I personally suspect that finding ways for these two groups of people to cooperate, and to feel like they are on the same team, could be quite good for helping us achieve our common goal of helping others better. (I think posts like this one, and its response, do wonders for reminding all sorts of different people that we are, in fact, all in it together, and that we can find little pockets for everyone who cares deeply, to help us all help others more.)
There are also limited positions in organisations, as well as limited capacity among senior people to train up junior people. But, again, I’m optimistic that 1) this won’t be permanent, and 2) we can work out how to better make sure that people who care deeply about effective altruism, but whose careers are outside effective altruism organisations, also feel like valued members of the community.