This feels like it misses an important point. On the margin, maybe less intelligent people will, on average, have less individual impact. But given that there are far more people of average intelligence than people on the right tail of the IQ curve, if EA could tune its pitches more to people of average intelligence, it could reach a far greater audience and thereby have a larger summed impact. Right?
I think there are also a couple of other assumptions in here that aren’t obviously true. For one, it assumes a very individualistic model of impact; but it seems possible that the most impactful social movements come out of large-scale collective action, which necessarily requires involvement from broader swaths of the population. Also, I think the driving ideas in EA are not that complicated, and could be written up in equally rigorous ways that don’t require being very smart to parse.
This comment upset me because I felt that Olivia’s post was important and vulnerable, and, if I were Olivia, I would feel pushed away by this comment. But rereading your comment, I’m now thinking you had better intentions than I initially felt? Idk, I’m keeping this in here because the initial gut reaction feels valuable to name.
Thanks, I appreciate this feedback.
Anyway, on a good day, I aim for my comments on this Forum to be true, necessary, and kind. I don’t always succeed, but I try my best.
I think it’s importantly true that different people have different capacities for impact. I also think it’s important and true to note that the EA community is less well set up to accommodate many people than other communities are. I think what I said is also kinder to say, in the long run, than casual reassurances that make it harder for people to understand what’s going on. I think most of the other comments do not come from an accurate model of what’s most kind to Olivia (and onlookers) in the long run.
Is my comment necessary? I don’t know. In one sense it clearly isn’t (people can clearly go about their lives without reading what I said). But in another sense, I feel better about an EA community that is more honest with potential members about our best guesses as to what we are and what we try to do.
In terms of “pushed away”, I will be sad if Olivia (and others) read my comment and feel dissuaded from the project of doing good. I will be much less sad about some people reading my comment and it being one component in them correctly* deciding that this community is not for them. The EA community is not a good community for everyone, and that’s okay.
(Perhaps you think, as some of the other commenters seem to, that the EA community can do a ton more to be broadly accommodating. This is certainly something that’s tractable to work on; e.g., we could shift our role models to be more like the people in Strangers Drowning rather than top researchers and entrepreneurs. But I’m not working on this, and chances are, neither are you.)
*There is certainly a danger of being overly prone to saying “harsh truths”, such that people are incorrectly pushed away relative to a balanced portrayal. But I still stand behind what I said, especially in the context of trying to balance out the other comments that were on this post before I commented, notably before Lukas_Gloor’s comment.
FWIW I strongly agree with this.
Will we permanently have low capacity?
I think it is hard to grow fast and stay nuanced, but I personally am optimistic about ending up as a large community in the long run (not next year, but maybe next decade), and I think we can sow seeds that help with that (e.g. by making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere).
Good question! I’m pretty uncertain about the ideal growth rate and eventual size of “the EA community”; in my mind this is among the more important unresolved strategic questions (though I suspect it’ll only become significantly action-relevant in a few years).
In any case, by expressing my agreement with Linch, I didn’t mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime “making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere” is (in some cases) the right goal.
Thanks 😊.
Yeah, I’ve noticed that this is a big conversation right now.
My personal take
EA ideas are nuanced, and ideas do (and should) move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people.
However, the core bit of effective altruism, something like “help others as much as we can and change our minds when we’re given a good reason to”, does seem like an idea that has room for a much wider ecosystem than we have.
I’m personally hopeful we’ll get better at striking a balance.
I think it might be possible to have both a small group that is highly connected and dedicated (which can maybe move quickly) and many more adjacent people and groups that feel part of our wider team.
Multiple groups co-existing means we can broadly be more inclusive, with communities that accommodate a very wide range of caring and curious people, where everyone who cares about the effective altruism project can feel they belong and can add value.
At the same time, we can maybe still get the advantages of a smaller group, because smaller groups still exist too.
More elaboration (because I overthink everything 🤣)
Organisations like GWWC do wonders for creating a version of effective altruism that is more accessible and distinct from the vibe of, say, the academic field of “global priorities research”.
I think it is probably worth it on the margin to invest a little more effort into the people who are sympathetic to the core effective altruism idea, but who might, for whatever reason, not find a full sense of meaning and belonging within the smaller group of people who are more intense and more weird.
I also think it might be helpful to put a tonne of thought into what community builders are supposed to be optimizing for. Exactly what that thing is, I’m not sure, but I feel like it hasn’t quite been nailed just yet and lots of people are trying to move us closer to this from different sides.
Some people seem to be pushing for things like less jargon and more inclusivity. Others are pointing out that there is a trade-off here because we do want some people to be thinking outside the Overton Window. The community also seems quite capacity constrained and high-fidelity communication takes so much time and effort.
If we’re trying to talk to 20 people for one hour each, we’re not spending those 20 hours talking to just one incredibly curious person who has plenty of reasonable objections and therefore needs someone, or several people, to explore the various nuances with them (like people did with me, possibly mistakenly 😛, when I first became interested in effective altruism, and I’m so incredibly grateful they did). If we’re spending 20 hours having in-depth conversations with one person, that means we’re not having in-depth conversations with someone else. These trade-offs sadly exist whether or not we are consciously aware of them.
I think there are some things we can do that are big wins at low cost, though, like just being nice to anyone who is curious about this “effective altruism” thing (even if we don’t spend 20 hours with everyone, we can usually spend 5 minutes just saying hello and making people who care feel welcome and feel that showing up is valued, because imo it definitely should be!).
Personally, I hope there will be more groups that are about effective altruism ideas where more people can feel like they truly belong. These wider groups would maybe be a little bit distinct from the smaller group(s) of people who are willing to be really weird and move really fast and give up everything for the effective altruism project. However, maybe everyone, despite having their own little sub-communities, still sees each other as wider allies without needing to be under one single banner.
Basically, I feel like the core thrust of effective altruism (helping others more effectively, using reason and evidence to form views) could fit a lot more people. I also feel like it’s good to have more tightly knit groups with a more specific purpose (like trying to push the frontiers of doing as much good as possible, in ways that are possibly less legible to a large audience).
I am hopeful these two types of communities can co-exist. I personally suspect that finding ways for these two groups of people to cooperate and feel like they are on the same team could be quite good for helping us achieve our common goal of helping others better (and I think posts like this one and its responses do wonders for reminding all sorts of different people that we are, in fact, all in it together, and that we can find little pockets for everyone who cares deeply to help us all help others more).
There are also limited positions in organisations, as well as limited capacity of senior people to train up junior people, but, again, I’m optimistic that 1) this won’t be permanent and 2) we can work out how to better ensure that people who care deeply about effective altruism but have careers outside effective altruism organisations also feel like valued members of the community.