Update on cause area focus working group

Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns, there has recently been a fair amount of discussion among EAs about whether it would make sense to rebalance the movement’s portfolio of outreach/​recruitment/​movement-building activities away from efforts that use EA/​EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy’s Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions.

In the end, the group ended up having only two meetings, in part because it proved more difficult than expected to surface key action-relevant disagreements. Prior to the first session, participants circulated relevant memos and their initial thoughts on the topic. The group also did a small amount of evidence-gathering on how the FTX collapse has impacted the perception of EA among key target audiences. At the end of the process, working group members filled in an anonymous survey where they specified their level of agreement with a list of ideas/​hypotheses generated during the two sessions.[1] This included many proposals/​questions for which this group/​its members aren’t the relevant decision-makers, e.g. proposals about actions taken/​changes made by various organisations. The idea behind discussing these wasn’t for the group to make any direct decisions about them, but rather to get a better sense of what people thought about them in the abstract, in the hope that this might sharpen the discussion of the broader question at issue.

Some points of significant agreement:

  • Overall, there seems to have been near-consensus that, relative to the status quo, it would be desirable for the movement to invest more heavily in cause-area-specific outreach, at least as an experiment, and less (in proportional terms) in outreach that uses EA/​EA-related framings. At the same time, several participants also expressed concern about overshooting by scaling back on forms of outreach with a strong track record and thereby “throwing out the baby with the bathwater”, and there seems to have been consensus that a non-trivial fraction of outreach efforts that are framed in EA terms are still worth supporting.

    • Consistent with this, when asked in the final survey to what extent the EA movement should rebalance its portfolio of outreach/​recruitment/​movement-building activities away from efforts that use EA/​EA-related framings and towards projects that instead focus on the constituent causes, responses generally ranged from 6-8 on a 10-point scale (where 5=stick with the status quo allocation, 0=rebalance 100% to outreach using EA framings, 10=rebalance 100% to outreach framed in terms of constituent causes), with one respondent selecting 3/10.

  • There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that’s explicitly framed as being focused on x-risk or AI risk. This was the most concrete recommendation to come out of this working group. My sense from the discussion was that this consensus was mainly driven by people agreeing that there would be value of information to be gained from trying this; I perceived more disagreement about how likely it is that this would prove a good permanent change.

    • In response to a corresponding prompt (“ … at least one of the EAGs should get replaced by an x-risk or AI-risk focused conference …”), answers ranged from 7-9 (mean 7.9), on a scale where 0=very strongly disagree, 5=neither agree nor disagree, 10=very strongly agree.

  • There was consensus that CEA should continue to run EAGs.

    • In response to the prompt “CEA should stop running EAGs, at least in their current form”, all respondents selected responses between 1-3 (on a scale where 0=strongly disagree, 5=neither agree nor disagree, 10=strongly agree).

    • Note that there is some potential tension between this and the fact that (as discussed below) three respondents thought that CEA should shift to running only conferences that are framed as being about specific cause areas/​sub-questions (as opposed to about EA). Presumably, the way to reconcile this is that according to these respondents, running EAGs (including in their current form) would still be preferable to running no conferences at all, even though running conferences about specific cause areas would be better.

  • There was consensus that EAs shouldn’t do away with the term “effective altruism.”

    • Agreement with the prompt “We (=EAs) should ‘taboo’ the term ‘effective altruism’” ranged from 0-3, on a scale where 0=very strongly disagree, 5=neither agree nor disagree, 10=very strongly agree.

  • There was consensus that the damage to the EA brand from the FTX collapse and associated events has been meaningful but non-catastrophic.

    • On a scale where 0=no damage, 5=moderate damage, 10=catastrophic damage, responses varied between 3-6, with a mean of 4.5 and a mode of 4/10.

  • There was near-consensus that Open Phil/​CEA/​EAIF/​LTFF should continue to fund EA group organisers.

    • Only one respondent selected 5/10 in response to the prompt “Open Phil/​CEA/​EAIF/​LTFF should stop funding EA group organisers”, everyone else selected numbers between 1-3 (on a scale where 0=strongly disagree, 5=neither agree nor disagree, 10=strongly agree).

  • There was near-consensus that Open Phil should generously fund promising AI safety community/​movement-building projects they come across, and give significant weight to the value of information in doing so.

    • Seven respondents agreed with a corresponding prompt (answers between 7-9), one neither agreed nor disagreed.

  • There was near-consensus that at least for the foreseeable future, it seems best to avoid doing big media pushes around EA qua EA.

    • Seven respondents agreed with a corresponding prompt (answers between 6-8), and only one disagreed (4).

Some points of significant disagreement:

  • There was significant disagreement about whether CEA should continue to run EAGs in their current form (i.e. as conferences framed as being about effective altruism), or whether it would be better for them to switch to running only conferences that are framed as being about specific cause areas/​subquestions.

    • Three respondents agreed with a corresponding prompt (answers between 6-9), i.e. agreed that EAGs should get replaced in this manner; the remaining five disagreed (answers between 1-4).

  • There was significant disagreement about whether CEA should rename the EA Forum to something that doesn’t include the term “EA” (e.g. “MoreGood”).

    • Three respondents agreed with a corresponding prompt (answers between 6-8), i.e. thought that the Forum should be renamed in such a way; the remaining five disagreed (answers between 1-4).

  • There was significant disagreement about whether 80k (which was chosen as a concrete example to shed light on a more general question that many meta-orgs run into) should be more explicit about its focus on longtermism/​existential risk.

    • Five respondents agreed with a corresponding prompt (answers between 6-10), two respondents disagreed (answers between 2-4), one neither agreed nor disagreed.

    • Relatedly, in response to a more general prompt asking whether a significant fraction of EA outreach problematically understates the extent to which these efforts are motivated by concerns about x-risk specifically, six respondents agreed (answers between 6-8) and two disagreed (both 3).

  • There was significant disagreement about whether OP should start a separate program (distinct from Claire’s and James’ teams) focused on “EA-as-a-principle”/​”EA qua EA” grantmaking.

    • Five respondents agreed with a corresponding prompt (answers between 6-9), two respondents disagreed (answers between 2-4), one neither agreed nor disagreed.

As noted above, this wasn’t aiming to be a decision-making group (instead, the goal was to surface areas of agreement and disagreement from different people and teams and shed light on potential cruxes where possible), so the working group per se isn’t planning particular next steps. That said, a couple of next steps that are happening and are consistent with the themes discussed above are:

  • CEA (partly prompted by Open Phil) has been exploring the possibility of switching to having one of the EAG-like events next year be explicitly focused on existential risk, as touched on above.

  • More generally, Open Phil’s Longtermist EA Community Growth team expects to rebalance its field-building investments by proportionally spending more on longtermist cause-specific field building and less on EA field building than in the past, though it’s currently still planning to continue to invest meaningfully in EA field building, and the exact degree of rebalancing is still uncertain. (The working group provided helpful food for thought on this, but the move in that direction was already underway independently.)

I’m not aware of any radical changes planned by any of the participating organisations, though I expect many participants to continue thinking about this question and monitoring relevant developments from their own vantage points.

  1. ^

    Respondents were encouraged to go with their off-the-cuff guesses and not think too hard about their responses, so these should be interpreted accordingly.