I was talking with a new university group organizer recently, and the topic of heavy-tailed impact came up. Here I’ll briefly explain what heavy tails are and what I think they imply about university group community building.
What’s a heavy tail?
In certain areas, the (vast) majority of the total effect comes from a (small) minority of the causes. In venture capital, for example, a fund will invest in a portfolio of companies. Most are expected to fail completely. A small portion will survive but not change significantly in value. Just one or two will hopefully grow a lot, not only compensating for the failures, but returning the value of the fund multiple times over. These one or two companies can determine the overall return of the fund.
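The dynamic above can be sketched with a quick simulation. This is a toy model, not real VC data: returns are drawn from a Pareto distribution, and the shape parameter is chosen purely for illustration.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_portfolio(n=100, alpha=1.1):
    """Simulate a heavy-tailed portfolio and return the share of the
    total payoff contributed by the best two investments.

    alpha close to 1 makes the tail heavy; both n and alpha are
    illustrative assumptions, not calibrated to real funds.
    """
    # Pareto-distributed return multiples on a unit stake
    returns = [random.paretovariate(alpha) for _ in range(n)]
    top_two = sum(sorted(returns, reverse=True)[:2])
    return top_two / sum(returns)

print(f"Top 2 of 100 investments drive {simulate_portfolio():.0%} of the return")
```

Running this a few times shows the signature of a heavy tail: two companies out of a hundred routinely account for a large fraction of the fund's entire return, far beyond the 2% an even split would give them.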
How does this apply to community building?
A few people who come out of your university group may well end up being responsible for the vast majority of your group’s impact. Those people may be extraordinarily high earners, top AI safety researchers, or strong leaders who build up effective animal advocacy organizations. Group members who aren’t in this category can certainly end up having meaningful impact, but they are not the primary drivers of the “return” of your “portfolio.”
If you could just find those top people and do everything possible to make sure they ended up succeeding, that would be the best thing to do. The problem is, you don’t know who is going to be on the tail. You don’t know for sure if interpretability or RLHF is a more promising alignment direction, or if people should be working on fish or insect welfare. You don’t know who is going to earn a bunch of money or who would actually donate it (well) once they do.
The goal is to find and support people who could plausibly end up being on the tail end of impact, just as the venture capitalist invests in all the companies that have a shot at increasing a lot in value very quickly.
To me, this means starting with broad outreach for introductory programs, with some special focus on groups that likely have extra talented people (Stamps Scholars at Georgia Tech, for example). It’s important not to select too harshly yet, because many people who have a serious shot at being on the tail are not in these groups, especially if you’re already at an institution that selects for a higher baseline level of talent. Also, the cost of missing out on a big hit is much higher than the cost of cultivating someone who doesn’t end up having much of an impact. This type of broad outreach also gets rid of some of the icky elitism feelings people sometimes have when talking about heavy-tailed impact.
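The cost asymmetry in the paragraph above can be made concrete with a toy expected-value calculation. Every number here is made up for illustration; the only point is that when a tail outcome is worth orders of magnitude more than the cost of cultivating a non-hit, even a small hit probability justifies including a marginal person.

```python
# Toy expected-value check for admitting one marginal applicant.
# All numbers are illustrative assumptions, not estimates.
p_hit = 0.01             # chance this person ends up on the tail
value_of_hit = 1000      # impact of a tail outcome (arbitrary units)
cost_of_cultivating = 1  # organizer time if they never do much

expected_value = p_hit * value_of_hit - (1 - p_hit) * cost_of_cultivating
print(f"Expected value of including them: {expected_value:.2f}")
```

Even at a 1% hit rate, the expected value comes out strongly positive, which is the quantitative version of "the cost of missing a big hit dwarfs the cost of a miss."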
Introductory programs are great because 1) they help participants understand the project of effective altruism and 2) they help facilitators figure out who might end up on the tail. Those who show up, do the readings, and engage thoughtfully and critically with the ideas are all worth investing in. The important corollary is that it’s probably not worth trying to invest in people who don’t fit that category. Design your programming to support those with interest, an open mind, and a desire to learn. Others may attend the occasional social or discussion event, which is absolutely fine, but don’t waste your time trying to convince them to do more seminars just to have more people participating. These people may eventually grow and change in ways that make them more interested in doing impactful work. My guess is that having introduced someone to EAs and EA thinking, without pushing ideas on them before they’re ready, meaningfully increases the probability that they engage with the community if and when this happens. This increased likelihood of engagement makes them more likely to end up on the tail end of impact.
What this doesn’t mean
I want to emphasize again that the idea that community building is heavy-tailed doesn’t mean that you should find only the best students at your university to join the introductory program. If you think you can predict who will end up being the most engaged participants, and you don’t want the less engaged to ruin the atmosphere for the others, form groups based on expected engagement and still provide a cohort for the bottom group. Only cut applicants who didn’t answer your questions or seem problematic. Running a marginal cohort is super low cost, and you could very well find someone great.
You can, if you want to, still maintain a perception of selectivity and/or formality through an application process and consistent, high-quality communication. And the selectivity thing can still be accurate – you’re just picking people to be in the strongest cohorts instead of picking people to accept.