I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population, primarily because understanding EA ideas (and being drawn to them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, and not so old that one is already too busy with lots of other life goals). Trying to attract >1% of the population would, in my view, likely lead to a harmful dilution of the EA community.
Thanks for stating your view on this, as I'd guess it will be a crux for some.
FWIW, I’m not sure I agree with this. I certainly agree that there is a real risk of ‘dilution’, and other risks from both too-rapid growth and too large a total community size.
However, I’m most concerned about these risks if I imagine a community that’s kind of “one big blob” without much structure. But that’s not the only strategy on the table. There could also be a strategy where the total community is quite large but there is structure and diversity within the community regarding what exactly ‘being an EA’ means for people, who interacts with whom, who commands how many resources, etc.
I feel like many other professional, academic, or political communities are both quite large overall and, at least to some extent, maintain spaces that aren’t harmed by “dilution”. Perhaps most notably, consider that almost any academic discipline is huge, and yet there is formal and informal structure that to some extent separates the wheat from the chaff. There is the majority of people who drop out of academia after their PhDs, and the tiny minority who become professors; there is the majority of papers that will never be cited or are of poor quality, and then there is the very small number of top journals; there is the majority of colleges and universities where faculty are mostly busy teaching and from which we don’t expect much innovation, and the tiny fraction of research-focused top universities, etc.
I’m not saying this is clearly the way to go, or even feasible at all, for EA. But I do feel quite strongly that “we need to protect spaces for really high-quality interactions and intellectual progress” or similar claims, even if we accept them as assumptions, do not imply that it’s best to keep the total size of the community small.
Perhaps as an intuition pump, consider what Ramanujan’s life might have looked like if there hadn’t been a maths book accessible to people in his situation, a “non-elite” university and other education open to someone like him, etc.
Yeah, these are great points. I agree that with enough structure, larger-scale growth seems possible. Basically, I agree with everything you said. I’d perhaps add that in such a world, “EA” would have a quite different meaning from how we use the term now. I also don’t quite buy the point about Ramanujan – I think “spreading the ideas widely” is different from “making the community huge”.
(Small meta nitpick: I find it confusing to call a community of 2 million people “small” – really wish we were using “very large” for 2 million and “insanely huge” for 1% of the population, or similar. Like, if someone said “Jonas wants to keep EA small”, I would feel like they were misrepresenting my opinion.)
I think “spreading the ideas widely” is different from “making the community huge”
Yeah, I think that’s an important insight I also agree with.
In an ideal world the best thing to do would be to expose everyone to some kind of “screening device” (e.g. a pitch or piece of content with a call to action at the end) which draws them into the EA community if and only if they’d make a net valuable contribution. In the actual world there is no such screening device, but I suspect we could still do more to expand the reach of “exposure to the initial ideas / basic framework of EA” while relying on self-selection and existing gatekeeping mechanisms for reducing the risk of dilution etc.
My main concern with such a strategy would actually not be that it risks dilution but that it would be more valuable once we have more of a “task Y”, i.e. something a lot of people can do. (Or some other change that would allow us to better utilize more talent.)