It seems like there is a quality-versus-quantity trade-off: we could grow EA faster by expecting less engagement or commitment from each person. I think there's a lot of value in thinking about how to make EA scale massively. For example, if we wanted to grow EA to millions of people, we could lower the barrier to entry by focusing on a small number of core ideas or by advertising low-commitment actions such as earning to give. Scaling up the number of people massively would most benefit the most scalable charities, such as GiveDirectly.
I suppose this mostly has to do with growing the size of the "EA community", whereas I'm mostly thinking about growing the size of "people doing effectively altruistic things". There's a big difference in the composition of those groups. I also think there is a trade-off in how community-building resources are spent, but encouraging broad influence doesn't need to trade off against cultivating highly engaged EAs. One analogy: encouraging people to donate 10% doesn't mean that someone like SBF can't pledge 99%.
The counterargument is that impact per person tends to be long-tailed. For example, the net worth of Sam Bankman-Fried is ~100,000 times higher than that of a typical person. Therefore, who is in EA might matter as much as, or more than, how many EAs there are.
Yup, agreed. This is my model as well. That being said, I wouldn’t be surprised if the impact of influence also follows a long-tailed distribution: imagine if we manage to influence 1,000 people about the importance of AI-related x-risk, and one of them actually ends up being the one to push for some highly impactful policy change.
It's not clear to me whether quality or quantity is more important, because some of the benefits are hard to quantify. One easily measurable metric is donations: adding a sufficiently large number of average donors should have the same financial value as adding a single billionaire.
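To make the donation comparison concrete, here's a back-of-the-envelope sketch. Every figure in it is invented purely for illustration (the billionaire's lifetime giving, the average annual donation, and the length of a giving career are all assumptions, not data):

```python
# Rough illustration (all figures are assumptions, not real data):
# how many average donors equal one billionaire, in purely financial terms?
billionaire_lifetime_giving = 10_000_000_000  # assume $10B given over a lifetime
avg_annual_donation = 2_000                   # assume a typical donor gives $2k/year
giving_career_years = 40                      # assume a 40-year giving career

donors_needed = billionaire_lifetime_giving / (avg_annual_donation * giving_career_years)
print(f"{donors_needed:,.0f} average donors ~ one billionaire")  # 125,000 average donors ~ one billionaire
```

Under these made-up numbers, one billionaire is financially equivalent to roughly a hundred thousand average donors, which is one way to see why the long-tail argument above has force, and also why a movement of millions could still compete on this metric.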
Agreed. I’m similarly fuzzy on this and would really appreciate if someone did more analysis on this rather than deferring to the meme that EA is growing too fast/slow.