Cause-Generality Is Hard If Some Causes Have Higher ROI

Summary

  1. Returns to community building are higher in some cause areas than others

    1. For example: a cause-general university EA group is more useful for AI safety than for global health and development.

  2. This presents a trilemma: community building projects must either:

    1. Support all cause areas equally at a high level of investment, which leads to overinvestment in some cause areas

    2. Support all cause areas equally at a low level of investment, which leads to underinvestment in some cause areas, or

    3. Break cause-generality

  3. This trilemma feels fundamental to EA community building work, but I’ve seen relatively little discussion of it, and therefore would like to raise awareness of it as a consideration

  4. This post presents the trilemma, but does not argue for a solution

Background

  1. A lot of community building projects have a theory of change which aims to generate labor

  2. Labor is more valuable in some cause areas than others

    1. It’s slightly hard to make this statement precise, but it’s something like: the output elasticity of labor (OEL) depends on cause area (a standard definition is sketched just after this list)

    2. E.g. the amount by which animal welfare advances as a result of getting one additional undergraduate working on it is different than the amount by which global health and development advances as a result of getting one additional undergraduate working on it[1]

    3. Note: this is not a claim that some causes are more valuable than others; I am assuming for the sake of this post that all causes are equally valuable

  3. I will take as given that this difference exists now and is going to exist into the future (although I would be interested to hear arguments that it doesn’t/won’t)

  4. Given this, what should we do?

  5. My goal with this post is mostly to point out that we should probably do something weird, and less to suggest a specific weird thing to do
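
To pin down what I mean by “output elasticity of labor,” here is the standard definition from economics; the Cobb-Douglas line is only an illustration of the concept, not a model of any particular cause area.

```latex
% Output elasticity of labor: the percent change in output Y
% per percent change in labor L.
\epsilon_L = \frac{\partial Y / Y}{\partial L / L}
           = \frac{\partial Y}{\partial L} \cdot \frac{L}{Y}
% Illustration: for Cobb-Douglas output Y = A K^{\alpha} L^{\beta},
% this gives \epsilon_L = \beta. The claim above is that the analogue
% of \beta differs across cause areas.
```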

What concretely does it mean to have lower or higher OEL?

I’m using CEA teams as examples since that’s what I know best, though I think similar considerations apply to other programs. (Also, realistically, we might decide that some of these programs are just too expensive if OEL goes down, or redirect all resources to some projects with high starting costs if OEL goes up.)

| Program | How it looks with high investment[2] | How it looks with low investment |
| --- | --- | --- |
| Events | Catered; coffee/drinks/snacks; recorded talks; convenient venues | Bring your own food; venues in inconvenient locations; unconference/self-organized picnic vibes |
| Groups | Paid organizers; one-on-one advice/career coaching | Volunteer-organized meetups; maybe some free pizza |
| Online | Actively organized Forum events (e.g. debates); curated newsletter and highlights; paid Forum moderators; engineers and product people who develop the Forum | A place for people to post things when they feel like it, no active solicitation; volunteer-based moderation; limited feature development |
| Communications | Pitching op-eds/stories to major publications; creating resources like lists of experts that journalists can contact; funding publications (e.g. Future Perfect) | People post stuff on Twitter, maybe occasionally a journalist will pick it up |

What are Community Builders’ options?

I see a few possibilities (a toy numerical sketch of the trade-offs follows this list):

  1. Don’t change our offering based on the participant’s[3] cause area preference

    1. …through high OEL cause areas subsidizing the lower OEL cause areas

      1. This has historically kind of been how things have worked (roughly: AI safety subsidized cause-general work while others free-rode)

      2. This results in spending more on the low OEL cause areas than is optimal

      3. And also I’m not sure if this can practically continue to exist, given funder preferences

    2. …through everyone operating at the level that low OEL cause areas choose

      1. This results in spending less on high OEL cause areas than is optimal

      2. I’m also not sure how sustainable this is – e.g. if EA events are a lot less nice than AI safety events, AI safety people might just stop going to EA events[4]

    3. …through choosing some middle ground between what the low and high OEL cause areas want

      1. This results in inefficiencies on both sides

  2. Change our offering based on the participant’s cause area

    1. I explore this below
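
As a toy illustration of the trade-off in the first option (keeping the offering identical across cause areas), here is a minimal sketch. The square-root returns curve and the coefficients are my own assumptions, chosen only to make the arithmetic concrete; they are not estimates of anything.

```python
import math

# Toy model (an assumption for illustration, not the post's model): output for
# cause i from community building spend s is a_i * sqrt(s), where a_i stands in
# for how much the cause benefits from additional labor (higher a_i ~ higher OEL).
a = {"high_OEL_cause": 2.0, "low_OEL_cause": 1.0}  # hypothetical coefficients
budget = 100.0

def total_output(split):
    return sum(a[c] * math.sqrt(s) for c, s in split.items())

# Cause-general policy: treat both causes identically.
equal_split = {c: budget / 2 for c in a}

# Differentiated policy: equalizing marginal returns gives s_i proportional to a_i^2.
norm = sum(v ** 2 for v in a.values())
differentiated_split = {c: budget * v ** 2 / norm for c, v in a.items()}

print("Equal split:         ", equal_split,
      "-> output", round(total_output(equal_split), 2))
print("Differentiated split:", {c: round(s, 1) for c, s in differentiated_split.items()},
      "-> output", round(total_output(differentiated_split), 2))
```

With these made-up numbers the equal split loses roughly 5% of total output relative to the differentiated split; the only point is that treating causes identically has a quantifiable cost when returns to labor differ, not that these particular numbers mean anything.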

Can this be mitigated by moral trade?

  1. It seems to me like there are some opportunities for moral trade. E.g. if you have a university group, then maybe the Econ students go to GH&D, psychology students to digital sentience, etc. since these are the cause areas in which they have the strongest comparative advantage.

  2. Jonas suggests that working in cause areas other than your main one can sharpen skills and reduce insularity.

  3. Historically, more speculative causes have benefited from being attached to less speculative ones by being able to point to the latter’s achievements as examples of actually doing something useful

    1. (Though this also has bad effects)

  4. There is potentially opportunity for moral trade on the individual level (e.g. I am a fit for biosecurity but want to work on animal welfare, I trade with someone who has the opposite skill set), which makes the value of individuals’ labor less dependent on their cause area preferences.[5]

  5. I think the above mitigates some of the cause area differences, but we are still inevitably going to end up with substantial differences between cause areas. Some reasons why this seems inevitable:

    1. Different cause areas will have different existing levels of capital and labor

    2. Different cause areas will require different balances of capital versus labor (e.g. biology research might require expensive lab equipment, whereas global priorities research mostly just requires labor)

    3. Different cause areas will require different types of labor (notably, some cause areas might not value a randomly chosen undergraduate very much at all)

  6. It would be surprising if all of these factors perfectly canceled out

What if we could have our cake and eat it too?

  1. EA seems to be a very memetically fit set of ideas, perhaps more so than any individual cause area

  2. For example: I have heard from some AI safety university group organizers that, even though the vast majority of their group members have no interest in EA, amongst the ones who actually go on to have a career in AI safety, a large fraction are EA-involved

  3. It would be extremely convenient if the best way to generate labor for a specific cause area was a cause-neutral presentation of EA ideas

  4. My guess is that cause-neutral activities are 30-90% as effective as cause-specific ones (in terms of generating labor for that specific cause), which is remarkably high, but still less than 100% (a quick per-dollar reading of this range follows this list)

    1. e.g. spending $100 on an EA group will get you 30-90% as much labor for animal welfare as spending $100 on an animal welfare group would

  5. So I think the trade-offs here are less severe than one might expect, but still enough to mean that we have to prioritize
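
One way to read that range in per-dollar terms (the 30-90% figures are the guesses above; the “premium” framing is mine):

```python
# How many cause-neutral dollars buy the same amount of labor for a target cause
# as one cause-specific dollar, under the 30-90% guess above?
for r in (0.3, 0.9):
    print(f"At {r:.0%} relative effectiveness, a single-cause funder pays "
          f"~{1 / r:.1f}x as much per unit of labor via cause-neutral spending.")
```

At the high end the premium is small (~1.1x); at the low end it is large enough (~3.3x) that a funder focused on a single cause would clearly prefer cause-specific spending, which is the sense in which we still have to prioritize.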

What would it look like to change our offering based on the participant’s cause area?

Note: “cause area” here is not solely a self-reported preference; it also reflects your expected impact in that cause area. Some people might strongly prioritize a cause area but be unlikely to contribute to it (or vice versa), and this would presumably be taken into account.

| Program | What changing the offering based on cause area could look like |
| --- | --- |
| Events | Having cause area-specific events; having a different admissions bar based on the applicant’s cause area; having a different ticket price that the attendee needs to pay based on the applicant’s cause area |
| Groups | Having cause area-specific groups; giving differential support to group members based on cause area (e.g. group organizers are paid to organize some types of events but not others, or group organizers give one-on-one advising only to people interested in certain cause areas) |
| Online | Proactively generate and curate content from some cause areas; others are just driven by whatever people want to upload |
| Communications | Push stories and journalist resources for some cause areas, but not others |

Some possible flow-through effects of changing our offering based on the participant’s cause area

Negative effects:

  1. (More) people lying about their cause area preferences in order to receive more favorable treatment

  2. People working on lower OEL cause areas become more elite (e.g. only the top 5% of animal rights advocates get into EAG, but the top 30% of AI safety workers get in, meaning that AR attendees are more elite than AI safety ones), leading to weird social dynamics

  3. Lower OEL cause area aficionados being bitter about having a worse experience despite being equally (or more) dedicated, talented, etc.

    1. Also general exacerbation of the complaints we already hear about elitism

  4. (Not a complete list)

Positive effects:

  1. People rationally adjusting their career plans in response to “price signals”

    1. Importantly including people switching to earning to give because they realize their cause area has more labor than capital

  2. Less “bait and switch” vibe/complaints about intro materials – we are up front that some career paths are more valuable than others

  3. (Maybe) more efficient allocation of capital and labor across cause areas

  4. (Not a complete list)

Do we actually have to solve this now?

  1. Explicitly choosing any branch of this trilemma is going to upset a lot of people

  2. There is therefore a strong temptation to ignore the problem

  3. But, of course, ignoring the problem just means implicitly choosing one branch of the trilemma

  4. My guess is that explicitly choosing a branch will result in a better outcome

  5. I am therefore interested in discussion on this topic. Note that CEA is one logical entity that can make this choice, but approximately everyone involved in cause-general EA community building faces this trilemma, and I expect that e.g. different group organizers will choose different solutions

Thanks to Chana Messinger for suggesting this memo and Chana, Jake McKinnon, Gina Stuessy, Saul Munn, Charles He, Campbell Jordan, and Lizka Vaintrob for helpful feedback

  1. ^

    In some ways, asking about OEL for randomly chosen undergrads assumes the answer to the question. E.g. we would get different answers if the question was about the value of a randomly chosen development economist. Nonetheless, I think there is still some useful sense in which some cause areas generally get more value from labor than other cause areas.

  2. ^

    For simplicity, I’m assuming that the optimal level of investment correlates perfectly with the output elasticity of labor, but obviously this isn’t true. Notably, labor supply may be more or less responsive to changes in investment.

  3. ^

    Participant = attendee for events, group member for groups, etc.

  4. ^

    It’s not clear to me that this is true, and I would be interested in evidence in either direction. There are certainly many anecdotal examples of high-net-worth EAs being perfectly willing to attend a conference in a run-down hotel, for example. But I do have a fairly strong prior that you can usually accomplish things by spending money, so if you spend less money you will be less able to accomplish things like “attract people from XYZ group”.

  5. ^

    Even if this theoretically works, though, I expect it to be difficult in practice. E.g. it’s hard for people to maintain motivation to work on something they don’t care about but are doing just for moral trade reasons, and it’s hard for each side of a match like this to actually find each other.