I do community building with Effective Altruism at Georgia Tech. My primary focus areas are animal welfare and artificial intelligence.
Pete Rowlett
Different group organizers have widely varying beliefs that affect what work they think is valuable. From certain perspectives, work that’s generally espoused by EA orgs looks quite negative. For example, someone may believe that the harms of global health work through the meat eater problem dominate the benefits of reducing human suffering and saving lives. Someone may believe that the expected value of the future with humans is negative, and as such, that biosecurity work that reduces human extinction risk is net-negative. In this post I’ll briefly consider how this issue can affect how community builders (CBs) do their work.
Obligations to others
Since many major EA orgs and community members provide support to groups, there may be obligations to permit and/or support certain areas of work in the group. Open Phil, for example, funds EA groups and supports biosecurity work. There’s no mandate that organizers conduct any particular activities, but it’s unclear to me what degree of support for certain work is merited. It currently seems to me that there is no obligation to support work in any given area (e.g. running a biosecurity seminar), but there may be an obligation to not prevent another organizer from engaging in that activity. This seems like a simple solution, but there is some moral conflict when one organizer is providing background support such as managing finances, conducting outreach, and running social events that facilitate the creation and success of the controversial work.
Deferring
CBs could choose to accept that we (generally) aren’t philosophy PhDs or global priorities researchers and heavily weight the opinions of those people and the main organizations that employ them. This sort of decision making attempts to shift responsibility to other actors and can contribute to the problem of monolithic thinking.
Gains from trade
Maybe the organizers of groups A, B, and C think that the meat eater problem makes global health work net negative, but the organizers of groups D, E, and F prioritize humans more, which makes global health look positive. If everyone focuses on their priorities, organizers from A, B, and C miss out on great animal welfare promoters from D, E, and F, and organizers from D, E, and F miss out on great global health supporters from A, B, and C. On the other hand, if everyone agrees to support and encourage both priorities, everyone’s group members get into their comparative advantage areas and everyone is better off. This plan does ignore counteracting forces between interventions and the possibility that organizers will better prepare people for areas that they believe in. Coordinating this sort of trade also seems quite difficult.
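As a minimal sketch of this comparative-advantage logic, assuming completely made-up group compositions and a crude “one cultivated member per area of fit” accounting (none of these numbers come from real groups), the totals might look like this:

```python
# Toy sketch of the gains-from-trade argument above. All numbers and labels
# are hypothetical and chosen only to illustrate the structure of the claim.

groups = {
    # organizer_priority: the only cause area these organizers cultivate
    # when there is no trade between groups.
    "A": {"organizer_priority": "AW", "members_suited": {"AW": 3, "GH": 2}},
    "B": {"organizer_priority": "AW", "members_suited": {"AW": 3, "GH": 2}},
    "C": {"organizer_priority": "AW", "members_suited": {"AW": 3, "GH": 2}},
    "D": {"organizer_priority": "GH", "members_suited": {"AW": 2, "GH": 3}},
    "E": {"organizer_priority": "GH", "members_suited": {"AW": 2, "GH": 3}},
    "F": {"organizer_priority": "GH", "members_suited": {"AW": 2, "GH": 3}},
}

def cultivated(trade: bool) -> dict:
    """Count members cultivated toward each cause area, with or without trade."""
    totals = {"AW": 0, "GH": 0}
    for group in groups.values():
        for cause, n in group["members_suited"].items():
            # Without trade, organizers only cultivate members whose best fit
            # matches their own priority; with trade, they cultivate everyone.
            if trade or cause == group["organizer_priority"]:
                totals[cause] += n
    return totals

print("No trade:  ", cultivated(trade=False))  # {'AW': 9, 'GH': 9}
print("With trade:", cultivated(trade=True))   # {'AW': 15, 'GH': 15}
```

The point is only structural: under these hypothetical numbers, trading support increases the cultivated talent in each cause area relative to everyone sticking to their own priorities, before accounting for the counteracting forces mentioned above.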
Conclusion
I don’t see a simple way to solve these issues. My current plan is to reject the “deferring” solution, not prevent other organizers from working on controversial areas, accept that I’ll be providing them with background support, and focus on making, running, and sharing programming that reflects my suffering-focused values.
Fantastic post, thank you for writing it! One challenge I have with encouraging effective giving, especially with a broader non-EA crowd, is that global health and development will probably be the main thing people end up giving to. I currently don’t support that work because of the meat eater problem. If you have any thoughts on dealing with this, I’d love to hear them.
Some arguments I see for supporting global health work despite the meat eater problem are:
“People in low-income countries that are being helped with GiveWell-style interventions don’t actually eat many animal products.” (I think this is true relative to people in high-income countries, but I don’t think the amounts are negligible, and they are very likely sufficient to override the positive impact of helping people. This post is the type of analysis I’m thinking of. Some commenters rejected the whole line of reasoning, but I do think it’s relevant here.)
“Some interventions improve lives more than they save lives, so those who benefit don’t end up eating more animal products.” (Which ones are most in this category? Will I end up getting people to donate to these, or would they still end up donating to the others?)
“People are more likely to accept arguments focused on nonhuman animal welfare when they have healthier and more stable lives.” (This one feels a bit fluffy to me despite some plausible degree of truth. I think some people will change in that way, but not enough to compensate for the added harm.)
“Those that start doing effective giving with global health nonprofits will be more likely to engage with animal advocacy and other suffering-focused work in the future.” (This argument is more convincing than any of the others. Personally, I haven’t seen this happen with someone who learned about effective global health giving outside of EA, but I could see myself buying into this idea with more evidence, even some anecdotes.)
I was talking with a new university group organizer recently, and the topic of heavy-tailed impact came up. Here I’ll briefly explain what heavy tails are and what I think they imply about university group community building.
What’s a heavy tail?
In certain areas, the (vast) majority of the total effect comes from a (small) minority of the causes. In venture capital, for example, a fund will invest in a portfolio of companies. Most are expected to fail completely. A small portion will survive but not change significantly in value. Just one or two will hopefully grow a lot, not only compensating for the failures, but returning the value of the fund multiple times over. These one or two companies can determine the overall return of the fund.
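For intuition, here is a minimal simulation of what a heavy-tailed portfolio can look like. The Pareto distribution and its parameters are arbitrary assumptions chosen for illustration, not calibrated to real venture data:

```python
# Minimal illustration of heavy-tailed portfolio returns. The distribution
# and parameters below are arbitrary assumptions, not real venture data.
import numpy as np

rng = np.random.default_rng(0)

n_companies = 50
# Return multiple on invested capital for each company: most draws are near
# zero, a few are enormous (a tail index near 1 is very heavy-tailed).
returns = rng.pareto(a=1.1, size=n_companies)

total = returns.sum()
top_two_share = np.sort(returns)[-2:].sum() / total

print(f"Portfolio return multiple: {total:.1f}x")
print(f"Share of total return from the top 2 of {n_companies} companies: {top_two_share:.0%}")
```

Rerunning with different seeds changes the exact numbers, but with a tail this heavy the top one or two companies typically account for a disproportionately large share of the total.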
How does this apply to community building?
A few people that come out of your university group may well end up being responsible for the vast majority of your group’s impact. Those people may be extraordinarily high earners, top AI safety researchers, or strong leaders who build up effective animal advocacy organizations. Group members who aren’t in this category can certainly end up having meaningful impact, but they are not the primary drivers of the “return” of your “portfolio.”
If you could just find those top people and do everything possible to make sure they ended up succeeding, that would be the best thing to do. The problem is, you don’t know who is going to be on the tail. You don’t know for sure if interpretability or RLHF is a more promising alignment direction, or if people should be working on fish or insect welfare. You don’t know who is going to earn a bunch of money or who would actually donate it (well) once they do.
The goal is to find and support people who could plausibly end up being on the tail end of impact, just as the venture capitalist invests in all the companies that have a shot at increasing a lot in value very quickly.
To me, this means starting with broad outreach for introductory programs, with some special focus on groups that likely have extra talented people (Stamps Scholars at Georgia Tech, for example). It’s important not to select too harshly yet, because many people who have a serious shot at being on the tail are not in these groups, especially if you’re already at an institution that selects for a higher baseline level of talent. Also, the cost of missing out on a big hit is much higher than the cost of cultivating someone who doesn’t end up having much of an impact. This type of broad outreach also gets rid of some of the icky elitism feelings people sometimes have when talking about heavy-tailed impact.
Introductory programs are great because 1) they help participants understand the project of effective altruism and 2) they help facilitators figure out who might end up on the tail. Those who show up, do the readings, and engage thoughtfully and critically with ideas are all worth investing in. The important idea here is that it’s probably not worth trying to invest in people who don’t fit in that category. Design your programming to support those with interest, an open mind, and a desire to learn. Others may attend the occasional social or discussion event, which is absolutely fine, but don’t waste your time trying to convince them to do more seminars just to have more people participating. These people may eventually grow and change in ways that make them more interested in doing impactful work. My guess is that having introduced someone to EAs and EA thinking meaningfully increases the probability that they engage with the community if and when this occurs, without trying to push ideas on them before they’re ready. This increased likelihood of engagement with the community makes them more likely to end up on the tail end of impact.
What this doesn’t mean
I want to emphasize again that the idea that community building is heavy-tailed doesn’t mean that you should find only the best students at your university to join the introductory program. If you think you can predict who will end up being the most engaged participants, and you don’t want the less engaged to ruin the atmosphere for the others, form groups based on expected engagement and still provide a cohort for the bottom group. Only cut applicants who didn’t answer your questions or seem problematic. Running a marginal cohort is super low cost, and you could very well find someone great.
You can, if you want to, still maintain a perception of selectivity and/or formality through an application process and consistent, high-quality communication. And the selectivity thing can still be accurate – you’re just picking people to be in the strongest cohorts instead of picking people to accept.
My current belief in the sentience of most nonhuman animals comes partly from the fact that they were subjected to many of the same evolutionary forces that gave consciousness to humans. Other animals also share many brain structures with us. ChatGPT never went through that process and doesn’t have the same structures, so I wouldn’t really expect it to be conscious. I guess your post looks at the outputs of conscious beings, which are very similar to what ChatGPT produces, whereas I’m partly looking at the inputs that we know have created consciousness.
Just my two cents. And I do think this is a worthwhile question to ask! But I would probably update more in the direction of “digital sentience is a (future) possibility” than “most nonhuman animals probably aren’t conscious”.
I’ve addressed the point on costs in other commentary, so we may just disagree there!
I think the core idea is that the EA ethos is about constantly asking how we can do the most good and updating based on new information. So the book would hopefully codify that spirit rather than just talk about how great we’re doing.
I find it easier to trust people whose motivations I understand and who have demonstrated strong character in the past. History can give a better sense of those two things. Reading about Julia Wise in Strangers Drowning, for example, did that for me.
Humans often think about things in terms of stories. If you want someone to care about global poverty, you have a few ways of approaching it. You could tell them how many people live in extreme poverty and that by donating to GiveDirectly they’ll get way more QALYs per dollar than they would by donating elsewhere. You could also tell them about your path to donating, and share a story from the GiveDirectly website about how a participant benefited from the money they received. In my experience, that’s the better strategy. And absolutely, the EA community exists to serve a purpose. Right now I think it’s reasonably good at doing the things that I care about, so I want it to continue to exist.
Agreed!
I think there could be a particular audience for this book, and it likely wouldn’t be EA newbies. The project could also take on a lot of different forms, from empirical report to personal history, depending on the writer. Hopefully the right person sees this and decides to go for it if and when it makes sense! Regardless, your commentary is appreciated.
Great point! A historian or archivist could take on this role. Maybe CEA could hire one? I’d say it fits within their mission “to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.”
Definitely agree with Chris here! Worst case scenario, you create useful material for someone else who tackles it down the line; best case scenario, you write the whole thing yourself.
I think opportunity cost is well worth mentioning, but I don’t know that I think it’s as high as you believe it to be.
Choosing someone who has been around a while is one option. The value of having an experienced community member do it is built-in trust, access, and understanding. The costs are the writer’s time (though that cost is decreasing as more people start writing about EA professionally) and the time of those being interviewed. I would also note that while there’s lots of work for technical people in EA, writers in the community may not have found such great opportunities for impact.
Having a relative outsider take on the project would add objectivity, as Dylan noted. Objectivity would both improve credibility to outsiders and increase the likelihood of robust criticism being made. I also think there are just a lot of pretty great writers in the world who might find EA interesting. Perhaps you just get different benefits from different types of writers.
There’s a cost to waiting as well. The longer you wait, the more likely it is that important parts of the story will be forgotten or deleted.
Someone should write a detailed history of effective altruism
I agree with this last point on underlying motives. EA is one direction for purpose-seeking people to go in, but not everyone will choose it. This program could also look vaguely religious, which is generally best to avoid.
I would also question whether a focused program is the best way to develop people with EA motivation. I think sometimes people go through the intro program and find purpose in it because...
They see their peers struggling with the same questions about meaning and purpose
Their facilitator has found meaning through EA and is acting based on EA ideas
It’s grounded in an empirical context (“Wow, I didn’t realize that lots of people live on $2 a day, 70 billion land animals are slaughtered each year for no good reason, and AGI may pose an existential risk.”)
I do, however, want to say that I appreciate the thinking you’ve done here. The identifying vs. generating talent topic is one that I look forward to reading more about, including follow-ups to this post with results.
I think that offering stipends to intro fellows is an idea worth considering, but I have real concerns at the moment, especially since Penn’s write-up about it hasn’t come out yet.
1.1 “Makes Fellowships more accessible to people who are not wealthy, potentially leading to a more diverse community”
I think there’s probably some truth to this, but honestly, I don’t think an amount that we could give every fellow would allow anyone to meaningfully decrease the outside work they do. I’d be in support of packages for those that wouldn’t be able to participate without one, or for whom sacrificing several hours each week would cause stress. There could be a very easy, low-barrier-to-entry application, and it could be made clear that anyone who could use the money is highly encouraged to take it.
1.2 “Incentivizes people to complete the Fellowship, increases accountability”
Likely true, but not necessarily positive. I don’t want people to come to the meetings for the money; I just don’t want money to be a barrier. Potentially decreasing the quality of fellows scares me. Also, too much accountability comes with a few problems. It could deter people from applying. The fellowship itself could feel more like a class and be less enjoyable. And organizers’ time, one of the hottest commodities in EA right now, would go to the administrative work of holding people to account. Failing to complete something and being held accountable for it could cause fellows to have negative sentiments toward EA, and whether that would be justified or not is irrelevant.
1.3 “It is the norm for Fellowships to be paid, and us breaking this norm looks bad”
Agreed. Fellowship does have a nice ring to it, but the program really isn’t one. I wouldn’t mind changing the name. The only problem is that everyone uses it. I would like to see broader discussion on this issue, and I think the organization that I work with (EA at Georgia Tech) would be very open to changing our terminology.
1.4 “Paid Fellowships appear more prestigious to outsiders, helping Fellows spend more time on the Fellowship”
I think a paid EA intro fellowship would look better on the surface, but I don’t think there’s enough commitment in the intro fellowship to warrant much prestige anyway.
1.5 “Fellows might put more effort into a program if they are being paid for it”
Agreed, generally, but I think the effect would be smaller in an EA fellowship than in other programs. Also, it doesn’t matter that much if people put more effort into the fellowship if it doesn’t affect the likelihood that they’ll take effective actions. You can’t really pay people to be altruistic, and I’m uncertain about the degree to which you can educate people into being altruistic if the underlying philosophy or empathy isn’t there.
2.1 “Might draw people to the Fellowship for the money rather than genuine interest”
I’m really concerned about this. As mentioned earlier, having high-quality fellows is a top priority. One reason I fell in love with EA during my fellowship was that my cohort had smart, thoughtful people who really cared about EA. Every member of the cohort that I’m currently facilitating is really into EA too. Having one disinterested person could really damage that experience for the others.
2.2/2.3 “Seems odd to pay people who are reading moral arguments,” “Might make EA look bad as we are a community oriented around helping others”
Communicating EA ideas with fidelity is widely discussed for a reason. If hundreds of new fellows tell their families that they’re getting a significant amount of money to read about how to best help people, that could be very problematic. I have a hard enough time explaining EA myself.
2.4 “Counterfactual use of money, could be better used elsewhere”
I’d rather give larger amounts of money to highly engaged EAs who organize or do other community building work so they can have more time/be more effective. There are a hundred things you could help interested people do with the money. These highly engaged people are the ones that will provide the most value by far, and are likely the recent hires that EA organizations would pay six figures for. They likely would have joined anyway after hearing our marketing. Also, the money could go to direct impact organizations rather than meta work. I think leading by example is important, and not wasting (or even just looking like we’re wasting) money is part of that. For me, this relates to veganism and sustainability. I know I don’t actually change that much by being vegan compared to donating to an effective animal advocacy nonprofit, but I do demonstrate a willingness to make sacrifices and gain credibility from it. Also, I read somewhere about GiveWell saying that their growth resulted mostly from outside people actively finding them rather than their own outreach. Others talked about GiveWell because they were so good at what they do, and I think this applies to the EA movement as a whole as well.
I know there’s plenty of EA meta money sloshing around out there, but I think it’ll pay to be careful here. For anyone considering whether to offer a stipend, at least wait on deciding until the Penn write-up comes out. I certainly have an opinion at the moment, but I don’t think my personal experience here is nearly broad enough for me to make a very strong judgement, so that piece could switch my thinking. Thanks for reading, and I look forward to reading any responses.
I don’t think that the development of sentience (the ability to experience positive and negative qualia) is necessary for an AI to pursue goals. I’m also not sure what it would look like for an AI to select its own interests. This may be due to my own lack of knowledge rather than a real lack of necessity or possibility though.
To answer your main question, some have theorized that self-preservation is a useful instrumental goal for all sufficiently intelligent agents. I recommend reading about instrumental convergence. Hope this helps!