[T]hese seem to be exactly the same principles CEA has stated for years. If nothing about them is changing, then it doesn't give much reason to think that CEA will improve in areas it has been deficient to date. To quote probably-not-Albert-Einstein, "Insanity is doing the same thing over and over again and expecting different results."
I really, really wish "transparency" would make the list again (am I crazy? I feel like it was on a CEA list in some form in the early days, and then was removed). I think there are multiple strong reasons for making transparency a core principle:
There's a distinction between what an organization wants to achieve and how it wants to achieve it. The principles described in the original post are related to the what. They help us identify a set of shared beliefs that define the community we want to cultivate.
I think there's plenty of room for disagreement and variation over how we cultivate that community. Even as CEA's mission remains the same, I expect the approach we'll use to achieve that mission will vary. It's possible to remain committed to these principles while also continuing to find ways to improve CEA's effectiveness.
I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don't think it's a goal in itself. Looking at the spectrum of approaches EA organizations take to doing good, I'm glad that there's room in our community for a diversity of approaches. I think transparency is a good example of a value where organizations can and should commit to it at different levels to achieve goals inspired by EA principles, and as a result I don't think it's a principle that defines the community.
For example, I think it's highly valuable for GiveWell to have a commitment to transparency in order for them to be able to raise funds and increase trust in their charity evaluations, but I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity. Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency. I think it's reasonable for different individuals and organizations in the EA community to have different standards for transparency, and I'm happy for CEA to support others in their approach to doing good at a variety of points along that transparency spectrum.
When it comes to CEA, I think CEA would ideally be more transparent and communicate with the community more, though I also don't think it makes sense for us to have a universal commitment to transparency such that I would elevate it to a "core principle." I think different parts of our work deserve different levels of transparency. For example:
I think CEA should communicate about programmatic goals, impacts, and major decisions, which we've done before (see e.g. here), but I think we would ideally be doing more.
On the other end of the spectrum, there are some places where confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team. I don't expect this will be a novel idea for most readers, but I think it's useful to illustrate that even for CEA, transparency isn't an unalloyed good.
Somewhere in between is something like the EAG admissions bar. We do share significant amounts of information about admissions, but as Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process. I think it's worth us potentially investing more in similar meta-transparency around where we will and won't expect to share information. I suspect the lack of total transparency will upset some members of the community (particularly those who aren't admitted to our events), but I think the tradeoffs are plausibly worth it.
I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does "recognition of tradeoffs" involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn't "scope sensitivity" basically a subset of the concerns implied by "impartiality"? Is something like "do a counterfactually large amount of good" supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does "scout mindset" need to be on the list, when "thinking through stuff carefully and scrupulously" is a prerequisite to effective counterfactual actions? On reading this post, I'm genuinely confused about what any of this means in terms of practical expectations about CEA's activities.
I feel quite strongly that these principles go beyond applause lights and are substantively important to EA. Instead of going into depth on all of the principles, I'll point out that many others have spent effort articulating the principles and their value, e.g. here, here, and here.
To briefly engage with some of the points in your comment and explain how I see these principles holding value:
Impartiality and scope sensitivity can exist independently of each other. Many contemporary approaches to philanthropy are highly data-driven and seek to have more impact, but they aren't impartial with respect to their beneficiaries. As an example, the Gates Foundation's US education program strikes me as an approach that is likely to be scope-sensitive without being impartial. They're highly data-driven and want to improve US education as much as possible, but it seems likely to me that their focus on US education as opposed to e.g. educational programs in Nigeria stems from Gates being in the US rather than an impartial consideration of all potential beneficiaries of their philanthropy.
I also think it's possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don't try to account for scope sensitivity (e.g. corporate campaigns are likely to improve the lives of orders of magnitude more animals per dollar).
I agree that a scout mindset and recognition of tradeoffs are important tools for doing counterfactually large amounts of good. I also think they're still wildly underutilized by the rest of the world. Stefan Schubert's claim that the triviality objection is beside the point resonates with me. The goal of these principles isn't to be surprising, but rather to be action-guiding and effective at inspiring us to better help others.
> "I view the community as CEA's team, not its customers" sounds like a way of avoiding ever answering criticisms from the EA community, and really doesn't gel with the actual focuses of CEA
I think it's important to view the quote from the original post in the context of the following sentence: "While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined)." I believe the goals of engaged community members and CEA are very frequently aligned, because I believe most community members strive to have a positive impact on the world. With that being said, if and when having a positive impact on the world and satisfying community members does come apart, we want to keep our focus on the broader mission.
I worry some from the comments in response to this post that people are concerned we won't listen to or communicate with the community. My take is that as "teammates," we actually want to listen quite closely to the community and have a two-way dialogue on how we can achieve these goals. With that being said, based on the confusion in the comments, I think it may be worth putting the analogy around "teammates" and "customers" aside for the moment. Instead, let me say some concrete things about how CEA approaches engagement with the community:
I believe the majority of CEA's impact flows through the community. In recent years, our decision-making has placed the most emphasis on metrics around the number of positive career changes people have made as a result of our programs. We think the community has valuable input to give us on how we can help them help others, and we use their input to drive decisions. We frequently solicit feedback for this purpose, e.g. via our recent forum survey, or the surveys we run after most of our events.
The ultimate beneficiaries of our work are groups like children who would otherwise die from malaria, chickens who would otherwise suffer in cages, and people who might die or not exist due to existential catastrophes. I think these are populations that the vast majority of the EA community is concerned about as well. I see us as collaborating to achieve these goals, and I think CEA is best poised to achieve them by empowering people who share core EA principles.
While I think most people in EA would agree with the above goals, I do think at times that meta organizations have erred too far in the direction of trying to optimize for community satisfaction. I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion.
Concretely, this affects how we evaluate CEA's impact. For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction. We do collect data on the latter and treat it as a useful input for our decision-making. Among other reasons, we believe it's helpful because we think one of the things that satisfies many community members is when we help them improve their impact! But it's an input, not the thing we're optimizing for. We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety), but ultimately we think we can use these funds better elsewhere, or that our donors can instead not give to CEA and redirect the funding to beneficiaries that both they and we care about.
Sometimes, approaches to serving different parts of the community are in tension with each other. To return to EAG admissions, I think Eli Nathan does a good job in this comment discussing how we incorporate stakeholder feedback but don't optimize for making the community happy. Sometimes we have to make tough decisions on tradeoffs between how we support different parts of the community, and we'll use a mix of community input and our own judgment when doing so.
I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP. I think it's also important to note that I don't perceive this to be a change from CEA's historical practices (if anything, I think this dynamic has become less pronounced with recent changes at Open Philanthropy and CEA, although I still am very unsure how it will shake out in the long run).
I still want us to invest more in communicating with the community. I suspect you and I have different takes on what the optimal level of communication and transparency is, but I do agree that CEA should directionally be communicating more. Our main bottleneck to doing so right now is bandwidth, not desire. (We're exploring ways to reduce that bottleneck but don't want to make promises.) I think it's a good thing when we engage more, and I'm supportive of efforts from our team to do so, whether that's through proactive posts from us or engaging with community critiques. The desire to be transparent was one of the original inspirations for doing this principles-first post.
I think the principles-first approach is good at recognizing the diversity of perspectives in our community and supporting individual community members in their own journey to do good. We regularly have forum posts, event attendees and speakers, and group members whose cause prioritization reflects choices I disagree with. I think that's good!
> With that being said, if and when having a positive impact on the world and satisfying community members does come apart, we want to keep our focus on the broader mission.
I understand the primary concern posed in this comment to be more about balancing the views of donors, staff, and the community about having a positive impact on the world, rather than trading off between altruism and community self-interest. To my ears, some phrases in the following discussion make it sound like the community's concerns are primarily self-interested: "trying to optimize for community satisfaction," "just plain helping the community," "make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)," and "don't optimize for making the community happy" (for EAG admissions).
I don't doubt that y'all get a fair number of seemingly self-interested complaints from dissatisfied community members, of course! But I think modeling the community's concerns here as self-interested would be closer to a strawman than a steelman approach.
> I think if anyone was best able to make a claim to be our customers, it would be our donors.
CEA receives many fewer resources from its donors than from the community. Again, CEA would not really have a job without the community. An organization like CEA would totally exist without your big donors (like, the basic institution of having an "EA leadership organization" requires a few hundred k per year, which you would be able to easily fundraise from a very small fraction of the community, and even at the current CEA burn-rate the labor-value of the people who are substantially directing their lives based on the broader EA community vastly eclipses the donations to CEA).
Your donors seem obviously much less important a stakeholder than the community, which is investing you with the authority to lead.
Hi Zachary,
First off, I want to thank you for taking what was obviously a substantial amount of time to reply (and also to Sarah in another comment that I haven't had time to reply to). This is, fwiw, already well above the level of community engagement that I've perceived from most previous heads of CEA.
On your specific comments, it's possible that we agree more than I expected. Nonetheless, there are still some substantial concerns they raise for me. In typical Crocker-y fashion, I hope you'll appreciate that me focusing on the disagreements for the rest of this comment doesn't imply that they're my entire impression. Should you think about replying to this, know that I appreciate your time, and I hope you feel able to reply to individual points without being morally compelled to respond to the whole thing. I'm giving my concerns here as much for your and the community's information as with the hope of a further response.
> I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don't think it's a goal in itself.
In some sense this is obviously true, but I believe it's gerrymandering what the difference between "what" and "how" actually is.
For example, to my mind "scout mindset" doesn't seem any more central a goal than "be transparent". In the post by Peter you linked, his definition of it sounds remarkably like "be transparent", to wit: "the view that we should be open, collaborative, and truth-seeking in our understanding of what to do".
One can imagine a world where we should rationally stop exploring new ideas and just make the best of the information we have (this isn't so hard to imagine if it's understood as a temporary measure to firefight urgent situations), and where major charities can make substantial decisions without explanation and this tends to produce trustworthy and trusted policies; but I don't think we live in either world most of the time.
In the actual world, the community doesn't really know, for example: with what weighting CEA prioritises longtermist causes over others; how it prioritises AI vs other longtermist causes; how it runs admissions at EAGs; why some posts get tagged as "community" on the forum, and therefore effectively suppressed while similar ones stay at the top level; why the "community" tag has been made admin-editable-only; what regional pro rata rates CEA uses when contracting externally; what your funding breakdown looks like (or even the absolute amount); what the inclusion criteria for "leadership" forums are, or who the attendees are; or many, many other such questions people in the community have urgently raised. And we don't have any regular venue for being able to discuss such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding; a simple weekly office hours policy could fix this.
> confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team
Confidentiality is largely unrelated to transparency. If in any context someone speaks to someone else in confidence, there have to be exceptionally good reasons for breaking that confidence. None of what I'm pointing at in the previous paragraph would come close to asking them to do that.
> Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process.
I think this statement was part of the problem… We as a community have no information on which to evaluate the statement, and no particular reason to take it at face value. Are there concrete examples of people gaming the system this way? Is there empirical data showing some patterns that justify this assertion (and comparing it to the upsides)? I know experienced EA event organisers who explicitly claim she's wrong on this. As presented, Labenz's statement is in itself a further example of lack of transparency that seems not to serve the community: it's a proclamation from above, with no follow-up, on a topic that the EA community would actively like to help out with if we were given sufficient data.
This raises a more general point: transparency doesn't just allow the community to criticise CEA, but enables individuals and other orgs to actively help find useful info in the data that CEA otherwise wouldn't have had the bandwidth to uncover.
> I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity
These scenarios get wheeled out repeatedly for this sort of discussion (Chris Leong basically used the same ones elsewhere in this thread), but I find them somewhat disingenuous. For most charities, including all core-to-the-community EA charities, this is not a concern. I certainly hope CEA doesn't deal in biosecurity or international politics; if it does, then the lack of transparency is much worse than I thought!
> Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency
All of the concerns they list there apply equally to all the charities that GiveWell, EA Funds etc. expect to be transparent. I see no principled reason in that article why CEA, OP, EA Funds, GWWC or any other regranters should expect so much more transparency than they're willing to offer themselves. Briefly going through their three key arguments:
"Challenge 1: protecting our brand" - empirically I think this is something CEA and EV have substantially failed to do in the last few years. And in most of the major cases (continual failure for anyone to admit any responsibility for FTX; confusion around Wytham Abbey, the fact that that was "other CEA" notwithstanding; PELTIV scores and other elitism-favouring policies; the community health team not disclosing allegations against Owen (or more politic-ly "a key member of our organisation") sooner; etc.) the bad feeling was explicitly over a lack of transparency. I think publishing some half-baked explanations that summarised the actual thinking behind these at the time (rather than in response to them later being exposed by critics) would a) have given people far less to complain about, and b) possibly have generated (kinder) pushback from the community that might have averted some of the problems as they eventually manifested. I have also argued that CEA's historical media policy of "talk as little as possible to the media" both left a void in media discussion of the movement that was filled by the most vociferous critics and generally worsened the epistemics of the movement.
"Challenge 2: information about us is information about grantees" - this mostly doesn't apply to CEA. Your grantees are the community and community orgs, both groups of whom would almost certainly like more info from you. (It also does apply to non-meta charities like GiveDirectly, who we nonetheless expect to gather large amounts of info on the community they're serving, but in that situation we think it's a good tradeoff.)
"Challenge 3: transparency is unusual" - this seems more like a whinge than a real objection. Yes, it's a higher standard than the average nonprofit holds itself to. The whole point of the EA movement was to encourage higher standards in the world. If we can't hold ourselves to those raised standards, it's hard to have much hope that we'll ever inspire meaningful change in others.
> I also think it's possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don't try to account for scope sensitivity
This may be quibbling, but I would consider focusing on visible subsets of the animal population (esp. pets) a form of partiality. This particular disagreement doesn't matter much, but it illustrates why I don't think gesturing towards principles that are really not that well defined is that helpful for giving a sense of what we can expect CEA to do in future.
> "While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined)."
I think this is politicianspeak. If AMF said "our primary goal is having a positive impact on the world rather than distributing bednets" and used that as a rationale to remove their hyperfocus on bednets, I'm confident a) that they would have a much less positive impact on the world, and b) that GiveWell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful; if you later find out that your core competencies aren't that valuable then you can either disband the organisation, or attempt a radical pivot (as Charity Science's founders did on multiple occasions!).
> I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion … We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)
I think this, along with the transparency question, is our biggest disagreement and/or misunderstanding. There's a major equivocation going on here between exactly *which* members of the community you're serving. I am entirely in favour of cutting costs at EAGs (the free wine at one I went to tasted distinctly of dead children), and of reducing all-expenses-paid forums for "people leading EA community-building". I want to see CEA support people who actually need support to do good: the low-level community builders with little to no career development, especially in low- or middle-income countries whose communities are being starved; the small organisations with good track records but such mercurial funding; all the talented people who didn't go to top-100 universities and therefore get systemically deprioritised by CEA. These people were never major beneficiaries of the boom, but were given false expectations during it and have been struggling in the general pullback ever since.
> For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction.
I think the focus would be better placed on why attendees are satisfied or dissatisfied. If I go to an event and feel motivated to work harder in what I'm already doing, or build a social network that makes me feel better enough about my life that I counterfactually make or keep a pledge, these things are equally as important. There's something very paternalistic about CEA assuming they know better what makes members of the community more effective than the members of the community do. And, as with any metric, "positive career changes" can be gamed, or could just be the wrong thing to focus on.
> I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP.
If both CEA and its donors are effectiveness-minded, this shouldn't really be a distinction: per my comments about focus above, serving CEA's community is about the most effective thing an org with a community focus can do, and so one would hope the donors would favour it. But also, this argument would be stronger if CEA only took money from major donors. As is, as long as CEA accepts donations from the community, sometimes actively solicits them, and broadly requires them (subject to honesty policy) from people attending EAGs, then your donors are the community and hence, either way, your customers.
(I work on the Forum but I am only speaking for myself.)
To respond to some bits related to the Forum:
> In the actual world, the community doesn't really know… why some posts get tagged as "community" on the forum, and therefore effectively suppressed while similar ones stay at the top level
If you're referring to "why" as in, what criteria are used for determining when to tag a post as "Community", that is listed in the Community topic page. If you're referring to "why" as in, how does that judgement happen, this is done by either the post author or a Forum Facilitator (as described here).
> In the actual world, the community doesn't really know… why the "community" tag has been made admin-editable-only
We provided a brief explanation in this Forum update post. The gist is that we would like to prevent misuse (i.e. people applying it to posts because they wanted to move them down, or people removing it from posts because they wanted to move them up).
Thank you for flagging your interest in this information! In general we don't publicly post about every small technical change we make on the Forum, as it's hard to know what people are interested in reading about. If you have additional questions about the Forum, please feel free to contact us.
In general, our codebase is open source so you're welcome to look at our PR descriptions. It's true that those can be sparse sometimes; feel free to comment on the PR if you have questions about it.
> we don't have any regular venue for being able to discuss such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding; a simple weekly office hours policy could fix this.
If you have questions for the Forum team, you're welcome to contact us at any time. I know that we have not been perfect at responding, but we do care about being responsive and do try to improve. You can DM me directly if you don't get a response; I am happy to answer questions about the Forum. I also attend multiple EAG(x) conferences each year and am generally easy to talk to there: I take a shift at the CEA org fair booth (if I am not too busy volunteering), and fill my 1:1 slots with user interviews asking people for feedback on the Forum. I think most people are excited for others to show an interest in their work, and that applies to me as well! :)
> "While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined)."
> I think this is politicianspeak. If AMF said "our primary goal is having a positive impact on the world rather than distributing bednets" and used that as a rationale to remove their hyperfocus on bednets, I'm confident a) that they would have a much less positive impact on the world, and b) that GiveWell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful; if you later find out that your core competencies aren't that valuable then you can either disband the organisation, or attempt a radical pivot (as Charity Science's founders did on multiple occasions!).
I personally disagree that it would be better for CEA to have a goal that includes a specific solution to their overarching goal. I think it is often the case that it's better to focus on outcomes rather than specific solutions. In the specific case of the Forum team, having an overarching goal that is about having a positive impact means that we feel free to do work that is unrelated to the Forum if we believe that it will be impactful. This can take the shape of, for example, a month-long technical project for another organization that has no tech team. I think if our goal were more like "have a positive impact by improving the EA Forum" that would be severely limiting.
I also personally disagree that this is "politicianspeak", in the sense that I believe the quoted text is accurate, will help you predict our future actions, and highlights a meaningful distinction. I'll refer back to an example from my other long comment: when we released the big Forum redesign, the feedback from the community was mostly negative, and yet I believe it was the right thing to do from an impact perspective (as it gave the site a better UX for new users). I think there are very few examples of us making a change to the Forum that the community overall disagrees with, but I think it is both more accurate for us to say that "our primary goal is having a positive impact on the world", and better for the world that that is our primary goal (rather than "community satisfaction").