I wish I could be as positive as everyone else, but there are some yellow flags for me here.
Firstly, as Zachary said, these seem to be exactly the same principles CEA has stated for years. If nothing about them is changing, then it doesn’t give much reason to think that CEA will improve in areas it has been deficient to date. To quote probably-not-Albert-Einstein, ‘Insanity is doing the same thing over and over again and expecting different results.’
Secondly, I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does ‘recognition of tradeoffs’ involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn’t ‘scope sensitivity’ basically a subset of the concerns implied by ‘impartiality’? Is something like ‘do a counterfactually large amount of good’ supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does ‘scout mindset’ need to be on the list, when ‘thinking through stuff carefully and scrupulously’ is a prerequisite to effective counterfactual actions? On reading this post, I’m genuinely confused about what any of this means in terms of practical expectations about CEA’s activities.
Thirdly, ‘I view the community as CEA’s team, not its customers’ sounds like a way of avoiding ever answering criticisms from the EA community, and really doesn’t gel with the actual focuses of CEA:
Most of the EA community doesn’t contribute to CEA in any material way
CEA lists its programs as ‘events, local groups, and an online forum’, and below that section, ‘community health’. All of these activities sound like services provided to the EA community. Admittedly the community largely doesn’t pay for these services, but that seems like a technicality trading on the fact that nonprofits don’t have literal customers. The community are effectively your beneficiaries—not in that you’re supposed to enrich us, but in that you’re supposed to empower us to work for/fundraise for or otherwise support charities. In the same way GiveDirectly is and should be judged by how effectively it serves its beneficiaries (e.g. Africans below the poverty line), CEA should be judged by how effectively it serves its effective beneficiaries by empowering them to do those things.
Lastly, I really really wish ‘transparency’ would make the list again (am I crazy? I feel like it was on a CEA list in some form in the early days, and then was removed). I think there are multiple strong reasons for making transparency a core principle:
The movement was founded on GiveWell/GWWC doing reviews of and ultimately promoting charities—reviews for which transparency is an absolute prerequisite for recommendation
It seems importantly hypocritical as a movement to demand it of evaluees but not to practice it at a meta level
GiveWell themselves have been a model of transparency in their reasoning, value assumptions, etc, and not coincidentally one of the least criticised and most celebrated EA organisations.
Much of the sea of intra-EA criticism (including my own) that followed FTXgate and Wytham Abbeygate involved concerns about lack of transparency
If CEA do indeed view us as ‘[its] team, not its customers’, it’s hard for us to make useful decisions about how to contribute without knowing the rationale or context for their key decisions
I am very positive about the new batch of Effective Ventures trustees and the moves toward independence that CEA and other EV projects have made, and I strongly hope that my concerns here turn out to be misplaced.
Note: I had drafted a longer comment before Arepo’s comment, given the overlap I cut parts that they already covered and posted the rest here rather than in a new thread.
...it also presupposes that CEA exists solely to serve the EA community. I view the community as CEA’s team, not its customers. While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members
I agree with Arepo that both halves of this claim seem wrong. Four of CEA’s five programs, namely Groups, Events, Online, and Community Health, have theories of change that directly route through serving the community. This is often done by quite literally providing them with services that are free, discounted, or just hard to acquire elsewhere. Sure, they are serving the community in order to have a positive impact on the wider world, but that’s like saying a business provides a service in order to make a profit; true but irrelevant to the question of whether the directly-served party is a customer.
I speculate that what’s going on here is:
CEA doesn’t want to coordinate the community the way any leader or manager would be expected to coordinate their team. That (a) seems like a quick path to groupthink and (b) would be hard given many members do not recognise CEA’s authority.
CEA also doesn’t want to feel responsible for making community members happy, because it feels the eternal critics that make up the community (hi!) will be unhappy regardless of what it does.
I’m sympathetic to both impulses, but if taken too far they leave the CEA <-> EA community relationship at an impasse and make the name ‘CEA’ a real misnomer. Regardless of preferred language, I hope that CEA will rediscover its purpose of nurturing and supporting the EA community by providing valuable services to its members[1] - a lower bar than ‘make these eternal critics happy’ - and I believe the short descriptions of those four teams quoted below already clearly point in that direction.
For me, this makes the served members customers, in the same sense that a parishioner is a customer of their church. Most businesses can’t make all prospective customers happy either! But if that fact makes them forget that their continued existence is contingent upon their ability to serve customers, then they are truly lost.
Events: We run conferences like EA Global and support community-organized EAGx conferences...
Groups: We fund and advise hundreds of local effective altruism groups...
Community Health: We aim to prevent and address interpersonal and community problems that can prevent community members and projects from doing their best work.
But if CEA cannot or will not do this, I think it should change its name.
I feel kind of confused about the point you are making here. CEA is the Centre for Effective Altruism, not the Center for Effective Altruists. This is fairly different from many community building organizations; e.g. Berkeley Seniors’ mission is to help senior citizens in Berkeley per se (rather than advance some abstract idea which seniors residing in Berkeley happen to support).
I can’t tell if you
Disagree that CEA differs from many community building organizations in this way
Agree that it differs but disagree that it should
Agree that it differs but feel like this difference is small/pedantic and not worth highlighting
Agree that it differs but disagree that “customer vs. team” is a useful way to describe this difference
I am not AGB, but it’s clear that a huge fraction of the power that CEA has comes from it being perceived as a representative of the EA community, and because the community empowered it to solve coordination problems between its members. That power is given conditional on CEA acting on behalf of the people who invested that power.
Sure, maybe CEA accepted those resources (and the expectations that came with them) with the goal of doing the most good, but de facto CEA as an institution basically only exists because of its endorsement by the EA community, and the post as written seems to me to basically deny that power relationship and responsibility.
Lightcone in its stewardship of LW is in a very similar position. Our goal with LW is to develop an art of rationality and reduce existential risk, but as an institution we are definitely also responsible for optimizing for the goals of the other stakeholders who have invested in LessWrong (like the authors, commenters, Eliezer who founded the site, and the broader rationality community which has invested in LessWrong as a kind of town square). People would be really pissed if we banned long-term contributors to LW, even if we thought it was best by our own lights, and rightfully so. They have invested resources which make them a legitimate stakeholder in the commons that we are administering.
(there is some degree to which we do have leeway here because there is widespread buy-in for something like “Well Kept Gardens Die by Pacifism”, but that leeway comes from the fact that there is widespread buy-in for discretion-based moderation, and that buy-in does not exist for all forms of possible changes to LW)
Thanks! For what it’s worth, the thing you are describing seems consistent with describing EAs as “teammates” (I also think that sports teams are successful ~entirely because of the work of their constituent team members) but I concede that the term is vague.
[Edit: further explained and qualified in a new comment below.]
Agreed, although I would note that the application varies from function to function.
For instance, I don’t think it runs EAGs or funds EAGxes through power granted by the community. So I think CEA has considerably more room to do what it thinks best by its own lights when dealing with its events than in, e.g., operating the community health team.
I would put other core community infrastructure in a similar bucket as community health, at least to the extent it constitutes a function where coordination of effort is an important factor and CEA can be seen as occupying the field. For example, it makes sense to coordinate a single main Forum, a single sponsor of university groups at a particular university, etc.
Huh, EAG feels like one of the most obvious community-institutions. Like, it’s the central in-person gathering event of the EA community, and it’s exactly the kind of thing where you want to empower an organization to run a centrally controlled version of it, because having a Schelling-event is very valuable.
But of course, in empowering someone to do that, CEA accepts some substantial responsibility to organize the event with the preferences of the community in mind. Like, EAG is really hard to organize if you are not in an “official EA-representative” position, and a huge fraction of the complexity comes from managing that representation.
I could have been clearer that different CEA functions sit on a continuum in their relationship with the community, rather than the more binary framing I used at points. Also, my view that CEA has more freedom around EAGs than certain other functions doesn’t mean I see no meaningful constraints there.
That being said, I think the “desirability of empowering an organization to run a centrally controlled” function is probably necessary but not sufficient to rely on the community-empowerment narrative. Here, various factors pull me toward finding a weaker obligation on CEA’s part: an obligation not to unfairly or inappropriately appropriate for its own objectives the assembling of many EAs in one city at one time, in a way that deprives other actors of their opportunity to make a play for that external/community resource. In other words, I see a minimum duty to manage that resource in an interoperable and cooperative manner, but generally not a duty to allocate CEA’s own resources and programming decisions in a way that lines up with community preferences.
I don’t think there is anything that prevents an organization from running a conference, even a top-notch conference, by its own lights and without necessarily surrendering a significant amount of control to the community. One plausible narrative here is that CEA put on a top-notch conference that others couldn’t or didn’t match (backing from Open Phil and formerly FTXFF doubtless would help!) and that the centralizing elements are roughly the natural result of what happens when you put on a conference that is much better than the alternatives. In this narrative, there would be no implied deal that makes CEA largely the agent of the community in running EAG.
That strikes me as at least as plausible on its face as a narrative in which the community “empower[ed]” CEA to run a conference with centralizing tendencies as long as the community retained sizable influence regarding how it is run. And given my desire to incentivize orgs to organize (and funders to fund) top-notch conferences, as well as a default toward the proper response to a conference you don’t like being to organize your own, I am inclined to make the natural-result narrative my starting point.
At the same time, I recognize the coordination work associated with EAGs—although I would specifically emphasize the coordination value of having a bunch of EAs in about the same place at the same time away from their day jobs. To me, that’s the main resource that is necessarily shared, in the sense of being something that can by its nature only happen 2-3 times per year, and is of community origin (rather than a CEA resource). I would take a fairly hard line against CEA actions that I judged to be an unfair or inappropriate grab at that resource. So while I would not impose the same duty you imply, I would give CEA a choice between that duty and a duty to run EAGs in an interoperable and cooperative manner.
Under that alternate duty, I would expect CEA to play nice with people and orgs who want to plan their own speakers and events that happen during the days of (or just before/after) the EAG. I would also expect CEA to take reasonable efforts to present its attendees with an option to opt-in to Swapcard with people who are not EAG attendees but are attending one of the other, non-CEA events. Failing to do these kinds of things would constitute a misuse of CEA’s dominant position that deprives other would-be actors the ability to tap into the collective community resource of co-location in space and time, and deprives the individual community members of free choice.
On the other hand, the alternative duty would not generally extend to deferring to the community’s preference on cause-area coverage for functions organized by CEA. Or to CEA’s decisions about who to provide travel grants or admittance to its own events. CEA choosing to de-emphasize cause area X in its own event planning, or employ a higher bar for travel grants for people working in cause area X, does not logically preclude the community from doing these things itself. To the extent the community finds it difficult to perform these functions (or delegate another org to do so), that would update me toward the natural-result narrative and away from viewing CEA as a delegate who primarily exercises the community’s power.
In contrast, my implied model for university groups is that the maximum healthy carrying capacity is usually one group per university due to a limited resource (student interest/attention) that is independent of CEA or any other org. Interoperability or co-existence is impractical, as the expected result would be failure of both/all groups from stretching the resource too thin. Moreover, starting a university group is within the operational capabilities of a number of actors (most non-EA student groups do not receive much in the way of external support, so the barriers to entry are pretty low). This raises the need for coordination among numerous potential actors. Under those circumstances, the empowerment/cooperation narrative is pretty convincing.
And many of the reasons I’m relatively more inclined to give CEA a freer hand on EAGs are lacking with the Forum. There are reasons a variety of conferences would be desirable (even if you want a single flagship conference), while the positive side of the ledger for multiple fora is more marginal. The speech on the Forum isn’t CEA’s own, so I’m much less worried that expectations of community control of fora would reduce CEA’s incentives and ability to speak its own message. The examples of topics on which I would defer to CEA’s ability to use its own resources to pursue its own mission don’t have good analogues in the Forum context. There are many actors who could pull off running a central forum—the LW code could be forked, servers are fairly cheap, and the moderation lift would be manageable for a relatively small group of volunteers.
A thing you might not know is that I was on the founding team of the EA Global series (and was in charge of EA Global for roughly the first two years of its existence). This of course doesn’t mean I am right in my analysis here, but it does mean that I have a lot of detailed knowledge about the kind of community negotiations that were going on at the time.
I agree with a bunch of the arguments you made, but my sense is that when creating EA Global, CEA leaned heavily on its coordinating role within the community (which I think made sense).
Indeed, CEA took over the EA Summit from Leverage explicitly because both parties thought it was pretty important to have a centralized annual EA conference.
I didn’t know that, and adding in historical facts could definitely move me away from my starting point! For example, they could easily update me more toward thinking that (1) CEA would need to more explicitly disclaim intent to run the semi-official coordinating event; (2) it would need to provide some advance notice and a phase-out to allow other actors to stand up their own conferences that sought to fulfill a centralizing function; and (3) it would have a broader affirmative obligation to cooperate with any actor that wanted to stand up an alternative to EAG.
That’s fair, I didn’t really explain that footnote. Note the original point was in the context of cause prioritisation, and I should probably have linked to this previous comment from Jason which captured my feeling as well:
A name change would be a good start.
By analogy, suppose there were a Center for Medical Studies that was funded ~80% by a group interested in just cardiology. Influenced by the resultant incentives, the CMS hires a bunch of cardiologists, pushes medical students toward cardiology residencies, and devotes an entire instance of its flagship Medical Research Global conference to the exclusive study of topics in cardiology. All those things are fine, but this org shouldn’t use a name that implies that it takes a more general and balanced perspective on the field of medical studies, and should make very very clear that it doesn’t speak for the medical community as a whole.
It seems possible, though far from obvious, that CEA’s funding base is so narrow that it’s forced to focus on that target in order to ensure the organisation’s survival on that front. This was something I thought Zach covered nicely:
The reality is that the majority of our funding comes from Open Philanthropy’s Global Catastrophic Risks Capacity Building Team, which focuses primarily on risks from emerging technologies. While I don’t think it’s necessary for us to share the exact same priorities as our funders, I do feel there are some constraints based on donor intent, e.g. I would likely feel it is wrong for us to use the GCRCB team’s resources to focus on a conference that is purely about animal welfare. There are also practical constraints insofar as we need to demonstrate progress on the metrics our funders care about if we want to be able to successfully secure more funding in the future.
While you raise a worthwhile point in that it probably would have been slightly better for this post to have a paragraph on ethical side constraints, I feel that the rest of this post is quite misguided (and that some points are likely due to an incomplete understanding of the top-level post).
If nothing about them is changing, then it doesn’t give much reason to think that CEA will improve in areas it has been deficient to date. To quote probably-not-Albert-Einstein, ‘Insanity is doing the same thing over and over again and expecting different results.’
CEA (and the EA movement as a whole) has been lacking in direction ever since Max stood down.
Having a clearly stated direction is an improvement in and of itself. It improves coordination and allows people to provide feedback on the direction of the community.
The shift in direction is that CEA is shifting further towards finding people who are (or could be) deeply committed to these principles and helping them deepen their understanding of them vs. shoveling as many people towards particular high-priority cause areas as possible.
I find the principles themselves quite handwavey, and more like applause lights
The concretization of these principles is laid out in much more detail in resources that both of us are familiar with. There is no need for Zachary to have gone into more detail here, because this post goes the other way: it pulls general principles out of the community’s specific discussions, norms, and practices.
‘I view the community as CEA’s team, not its customers’ sounds like a way of avoiding ever answering criticisms from the EA community
The mission is obviously more important than us. That should be uncontroversial.
I suspect that more EAs should dedicate their efforts to improving the health of the community and that this would increase the overall impact, but at the end of the day, the mission should come first[1].
In any case, counting up the number of activities CEA runs that achieve impact indirectly through the community is not particularly relevant to answering the question of whether CEA’s first duty is to the mission or the community.
Lastly, I really really wish ‘transparency’ would make the list again
It would have made sense for there to be a bit more discussion about ethical side-constraints, but including transparency in the list of core principles would honestly be just weird because transparency isn’t distinctly EA. Beyond that, the importance of transparency is significantly complicated by the concept of infohazards in areas like biohazards or AI safety. I really don’t see it as CEA’s role to take a side in these debates. I think it makes sense for CEA to embrace transparency as a key organisational value, but it’s not a core principle of EA in general and we should accept that different orgs will occupy different positions on the spectrum.
some points are likely due to an incomplete understanding of the top-level post
I’m not sure if you mean this question to be covered in the rest of your reply? If not, could you say concretely what you think I misunderstood? If so, I respectfully disagree that I misunderstood it:
The concretization of these principles is laid out in much more detail in resources that both of us are familiar with. There is no need for Zachary to have gone into more detail here
Maybe I’m less familiar with the resources than you think? I know huge amounts have been written on these notions, but I know of nothing that would fix my problem of ‘I don’t see how stating these principles gives me any meaningful information about CEA’s future behaviour’.
The mission is obviously more important than us. That should be uncontroversial.
I think that’s entirely consistent with what I’ve said. An organisation that aims to effect Y via X cannot afford to relegate X to an afterthought, or largely ignore the views of people strongly involved with X.
the importance of transparency is significantly complicated by the concept of infohazards in areas like biohazards or AI safety
I’m concerned that ‘infohazards’ get invoked far too often, especially to deflect concerns about (non)transparency. In CEA’s case in particular, it doesn’t seem like they deal with biohazards or AI safety at a level necessitating high security, and even if they do have some kind of black ops program dealing with those things that they’re not telling us about, that isn’t the transparency I’m concerned about. Just a general commitment to sharing info guiding key decisions about the community with the community, such as
sharing forum changes they’re considering, and the case for/against them
making all their hires open, or giving clear reasons why when they don’t
describing what their prioritisation process actually is, inasmuch as it can be formalised, between e.g. longtermism and animal welfare/other considerations
when they’re considering buying mansions in the Oxford countryside/other controversial multimillion-dollar purchases, publishing the cost-benefit calculation rather than merely asserting its existence
giving the breakdown of their funding sources
publishing the breakdown of their budget for EAGs
open-sourcing forum data (IIRC they might technically have done this? But with no documentation, and an API that you have to direct-link to)
avoiding behind-the-scenes work that they’d be embarrassed to have publicised (e.g. PELTIV scores)
generally cultivating a culture of directly engaging in discussion with the community more—e.g. regular office hours rather than highly intermittent AMAs, and, in threads like this, sticking around for a discussion rather than posting a top-down announcement and then entirely ignoring the comments.
I work on the Forum team, but this comment only represents my personal views and not those of CEA. Also, I am responding to this comment in particular because it mentions the Forum by name. I may respond to other comments if I have time but no promises.
First off, I want to say thank you for your comment. I think the Forum serves as an important space for organizations to get feedback from the community and I’m happy that it’s doing so here. I will also say that I think writing clearly is hard, and I am not a particularly good writer, so I am happy to clarify if anything I say is unclear.
‘I view the community as CEA’s team, not its customers’ sounds like a way of avoiding ever answering criticisms from the EA community, and really doesn’t gel with the actual focuses of CEA… you’re supposed to empower us to work for/fundraise for or otherwise support charities
An organisation that aims to effect Y via X cannot afford to relegate X to an afterthought, or largely ignore the views of people strongly involved with X.
My understanding of the phrase “I view the community as CEA’s team, not its customers” is that CEA’s ultimate goal is to improve the world, and increasing the satisfaction of the EA community (or alternatively, satisfying any particular request an individual might have) is not the ultimate goal. I believe the purpose of laying this out is to be transparent and help readers understand and predict how CEA will act. My guess is that very often we will be improving the world by doing things that satisfy the EA community.
For the Forum in particular, user feedback is a vital input into how we prioritize our work. We gather this information via user interviews (at events, by reaching out to specific groups of people while developing features, and by broadly offering user interview calls, as in my Forum profile), via links to feedback forms when testing things out and launching new features, by publishing posts and quick takes about our work, by running various surveys including the annual Forum user survey, and even by directly messaging users via the Forum to ask them questions. I genuinely believe that feedback is a gift, and I’m so grateful for people who take the time to provide it to us.
If you take one thing away from my comment, please remember that we love feedback—there are multiple ways to contact us listed here, including an anonymous option. You’re welcome to contact us with suggestions, questions, bug reports, feedback[1], etc. (I can only really speak for the Forum team, but I would guess other teams feel similarly.)
Earlier this year we implemented the ability to import Google Docs to the Forum and people gave us lots of positive feedback about that. I think most of the work on the Forum will be somewhere between “making the community happy” and “the community is mostly neutral, maybe a small subset are happy”—if you look at the features in our latest update post, I think basically all of them have been either requested by users or people have given us purely positive feedback on them[2]. One example of a change to the Forum that the EA community might have voted against is the big Forum redesign in 2023 - as you can see, we mostly got negative feedback about it. However, when I’ve interviewed users new to the site, I overwhelmingly get positive feedback about the design. It’s clear to me that having a skilled designer improve the site’s usability was the right choice.
This reflects how I view my own work—to do good by supporting the EA community, which does not always mean that we should do what they would vote for[3].
I think some of the disagreement is that people interpret the terms “team” and “customers” differently. In some ways we do treat Forum users as customers—for example, our engineers rotate being on-call to respond to customer service requests. We think this is worth their time because we feel that our users provide significant value for the world, not because our end goal is a high customer satisfaction score, but the result is basically the same. As I referenced earlier, our team functions similarly to other tech teams. So for example, when we are building a feature for group organizers, we will do many user interviews with group organizers. Thinking about my own experience as a customer, oftentimes websites will use dark patterns, compromise UX, prioritize engagement/addictiveness, and literally outright lie, all in order to maximize their profit. I am happy that we do not treat our users as customers in any of these ways. One slightly different way of thinking about “customer” is more like “customer service”, where an organization should strive to satisfy any individual who files a complaint. Honestly I think the Forum team is pretty good at this given our small size, but I would like us to be able to prioritize issues that users report relative to the value of our other potential work and not automatically file customer service reports in the highest priority bucket.
I like the term “team” because that emphasizes that we all broadly have the same goal (improving the world) and I am happy for Forum users to act in service of that goal (even if they criticize my work), in the same way that I appreciate when users give me feedback about the Forum in a way that reflects understanding of that shared goal (like, “I have this suggestion for you, though I’m guessing that this wouldn’t affect many people so it’s probably low priority”). In practice, much of the way that the Forum makes progress on that goal is by “empowering [people] to work for/fundraise for or otherwise support charities.” Another aspect of “team” I like is that this implies collaboration and transparency, since we have shared goals (so it would be against my interests to lie), whereas I think it’s entirely normal/expected for a company to mislead its customers[4]. “Team” means that we respect your time more than other websites (that treat you like customers) do, because we believe your time is valuable (for the world) and we want you to use it well, because we have shared goals. When someone answers my inactive user feedback form saying that they use the Forum less now because they are focused on doing good directly via their job, I don’t feel like I have “lost a customer”. I feel happy that they are presumably correctly valuing their time and doing more good (although I hope they still occasionally return to contribute back to the community).
A point that multiple commenters reference is about how CEA handles criticism. In my opinion, someone who is on the same team as you is much more likely to take your criticism seriously than any entity to which you are a customer. For example, if I complain to a company about their shady business practices, I expect them to completely ignore me or possibly lie to me, but certainly not to actually consider my point. If you complain to the Forum team about something we are doing that you consider morally dubious, we actually engage with it (at least internally—we have not always done as well as I would like at responding publicly, and I hope we improve on this in the future.)
Given this, I personally disagree that we “relegate the EA community to an afterthought” and that we “largely ignore the views of people strongly involved with EA”, and I disagree that we implied that we plan to do these things in the future. In my opinion, viewing the EA community as CEA’s “team” does not preclude us from caring about our effect on the community, nor does it mean that we no longer want to nurture and support the community, nor does it imply that we will ignore criticism, nor does it mean that we don’t care about people’s opinion of our work. I would go so far as to say those are more important for a teammate to care about than a company to care about.
…that isn’t the transparency I’m concerned about. Just a general commitment to sharing info guiding key decisions about the community with the community…
I believe the purpose of Zach’s post was to explain that CEA will focus on EA principles rather than specific cause areas, and that it was not meant to communicate anything about CEA’s principles as an organization. Personally I am quite pro-transparency and hope to post more about my work than has been the case in the past.
To respond to some specific points:
sharing forum changes they’re considering, and the case for/against them
I’m happy to do more of this myself. Some reasons that I do not prioritize this:
Lack of demand (I appreciate you sharing what you would like to see from us! It’s hard to know what is worth us writing about otherwise. For example, it’s not clear to me if anyone got any value out of this data-sharing post and it took me a fair amount of time to put it all together.)
I believe that I have a bias towards thinking that the Forum is valuable/important, and so I try to counter that in various ways. In this case, because I care a lot about the Forum respecting people’s time, I want to push back on assuming that Forum-related questions are valuable/important enough to be worth their attention. We just ran a Forum user survey which was quite long—I spent a long time iterating on the text/questions and cutting things down, and in the end I was still pretty worried about asking for too much time. As a tech team we already prioritize work based on user feedback, so additional feedback gathered from a public post will also have diminishing returns.
Smaller things, like the fact that I’m quite busy and am a slow writer, and I find publishing things on the Forum pretty scary.
We shared a public version of our half-quarter OKR planning doc in our Forum update post. That doc gets updated right after we finalize our OKRs, and is currently the closest thing to this that exists.
open-sourcing forum data (IIRC they might technically have done this? But with no documentation, and an API that you have to direct-link to)
Our codebase is open source, and I personally think the documentation is quite good. We use GraphQL, which is a commonly used technology. If you have questions about accessing data, feel free to contact us.
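For anyone who wants to explore the data directly, here is a minimal sketch of querying the Forum’s public GraphQL endpoint from Python. The endpoint URL, the posts query, and the field names follow community write-ups of the ForumMagnum schema rather than anything confirmed in this thread, so treat them as assumptions and verify them against the interactive explorer served at /graphql:

```python
import requests

# Sketch of pulling a few top posts from the EA Forum's public GraphQL API.
# The endpoint, query shape, and field names are assumptions based on
# community write-ups of the ForumMagnum schema; verify via the /graphql
# interactive explorer before relying on them.
ENDPOINT = "https://forum.effectivealtruism.org/graphql"

QUERY = """
{
  posts(input: {terms: {view: "top", limit: 5}}) {
    results {
      title
      pageUrl
      baseScore
    }
  }
}
"""

resp = requests.post(ENDPOINT, json={"query": QUERY}, timeout=30)
resp.raise_for_status()
for post in resp.json()["data"]["posts"]["results"]:
    print(f'{post["baseScore"]:>4}  {post["title"]}  ({post["pageUrl"]})')
```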
generally cultivating a culture of directly engaging in discussion with the community more
To this end, I will publicly suggest that if you have any questions for CEA, you should feel free to contact us.
Including critical feedback! Every time I talk to a user I emphasize that critical feedback is especially useful for us, because people are biased towards saying nice things to us (at least to our face—I think this is less the case online).
I actually don’t know of any particular requests for site performance improvements, or feedback after the fact about them, but I am confident that it was worth doing. Improving site speed is one of the most evidence-based ways for a site to decrease its bounce rate and improve its SEO ranking. This type of issue, which either minorly inconveniences many people or disproportionately impacts people who are not Forum users but would have been, is hard to justify working on purely based on the goal of “community satisfaction”, but makes more sense under the goal of “improving the world”.
To be clear, I think any organization has incentives against being 100% transparent, and I don’t think CEA is at the ideal level of transparency. But when I compare my time working in for-profit companies to my time working at CEA, it’s pretty stark how much more the people at CEA care about communicating honestly. For example, in a previous for-profit company, I was asked to obfuscate payment-related changes to prevent customers from unsubscribing, and no one around me had any objection to this.
Thanks for sharing your experience of working on the Forum Sarah. It’s good to hear that your internal experience of the Forum team is that it sees feedback as vital.
I hope the below can help with understanding the type of thing which can contribute to an opposing external impression. Perhaps some types of feedback get more response than others?
If you take one thing away from my comment, please remember that we love feedback—there are multiple ways to contact us listed here, including an anonymous option.
AFAICT I have done this twice, once asking a yes/no question about unclear forum policy and once about a Forum team post I considered mildly misleading. The first got no response, the other got a response which was inaccurate, which was unfortunate, though I certainly assume it was unintentionally so.
I want to be clear that I do not think I am entitled to get a response. I think the Forum team is entitled to decide it should focus on analytics not individuals, for example. I basically thought it had, and so mentally wrote off those pathways. But your comment paints a surprisingly different picture and repeatedly pushes these options, so it didn’t feel right to say that I disagree without disclosing a big part of why I disagree.
Looking to public, and frankly far more important, examples of this, the top comment on CEA’s last fundraising attempt is highly critical of the Forum / Online team’s direction and spend. At time of writing the comment has 23/2 agree/disagree votes and more karma than the top level post it’s under. This seems like the kind of thing one prioritises responding to if trying to engage, and 10 months ago Ben West responded “I mostly want to delay a discussion about this until the post fully dedicated to the Forum”. That post never came out[1]. So again my takeaway was that the Forum team didn’t value such engagement.
Given this, I personally disagree that we “relegate the EA community to an afterthought” and that we “largely ignore the views of people strongly involved with EA”, and I disagree that we implied that we plan to do these things in the future.
As someone who directionally agrees with the quoted sentiments, this was helpful in clarifying part of what’s going on here. I personally think that CEA has been opaque for the last few years, for better or for worse[2]. Others I have heard from think the same[3]. So I naturally interpret a post which is essentially a statement of continuity as a plan to continue down this road. Arepo makes a similar point in the 2nd paragraph of their first comment. But if you think CEA, or at least your team, has been responsive in the past, the same statement of continuity is not naturally interpreted that way.
To the best of my knowledge. If it did, please link to it as a response to the comment! This type of thing is hard to search for, but I did spend ~5 minutes trying.
Since I’ve pushed CEA to be more responsive here and elsewhere, I want to note that distance is helpful in some contexts. I am unsurprised to hear that the Forum redesign in 2023 got negative feedback from entrenched users but positive feedback from new users, for example; seems a common pattern with design changes.
I think that OP / CEA board members haven’t particularly focused on / cared about being open and transparent with the EA community....Remember that OP staff members are mainly accountable to their managers, not the EA community or others. CEA is mostly funded by OP, so is basically similarly accountable to high-level OP people.
(Again: only speaking for myself, and here in particular I will avoid speaking about or for other people at CEA when possible.)
I hope the below can help with understanding the type of thing which can contribute to an opposing external impression.
Yup, I think it’s very reasonable for people outside of CEA to have a different impression than I do. I certainly don’t fault anyone for that. Hopefully hearing my perspective was helpful.
The first got no response, the other got a response which was inaccurate
I’m really sorry that our team didn’t properly respond to your messages. There are many factors that could affect whether or not any particular message got a response. We currently have a team assistant who has significantly improved how we manage incoming messages, so if you sent yours before she joined, I would guess someone dropped it by accident. As an engineer I know I have not always lived up to my own standards in terms of responding in a timely manner and I do feel bad about that. While I still think we do pretty well for our small size, I’m guessing that overall we are not where I would personally like us to be.
Looking to public, and frankly far more important, examples of this, the top comment on CEA’s last fundraising attempt is highly critical of the Forum / Online team’s direction and spend. At time of writing the comment has 23/2 agree/disagree votes and more karma than the top level post it’s under. This seems like the kind of thing one prioritises responding to if trying to engage, and 10 months ago Ben West responded “I mostly want to delay a discussion about this until the post fully dedicated to the Forum”. That post never came out[1]. So again my takeaway was that the Forum team didn’t value such engagement.
Hmm I currently don’t recall any post about Forum fundraising. I think we considered fundraising for the Forum, but I don’t remember if any significant progress was made in developing that idea. In my opinion, Ben and Oscar wrote multiple detailed replies to that comment, though I am sympathetic to the take that they did not quite respond to Nuno’s central point. I think this is just a case of, things sometimes fall through the cracks, especially during times of high uncertainty as was the case in this example. I feel optimistic that, with more stability and the ability to plan for longer futures, CEA will do better.
I also want to differentiate between public and internal engagement. I read Nuno’s writing and discussed it with my colleagues. At the time I didn’t necessarily think I would have better answers than Ben so I didn’t feel the need to join the public conversation, but at this point I probably do have better answers. I’ll just broadly say that I agree that marginal value is what matters, as do others on my team. We do analyze the marginal impact of our Forum work. I would be excited to write more about it publicly but it will take a fair amount of work to make it clear and comprehensible for the Forum audience (up to my personal standards). Interestingly, Nuno’s points push me against taking the time to communicate publicly / be more open. Every hour I spend on writing a comment (and it can take me hours—I am not particularly good at writing; my training is in software engineering) is an hour that I don’t know how to value in the marginal impact analysis, so it defaults to being worth $0[1]. I strongly feel responsible for using EA/charitable money well, so using my work time to do something that I ultimately won’t put any value on is difficult.
I personally think that CEA has been opaque for the last few years
I don’t disagree with this. I personally would prefer that we had communicated publicly more in the past, and I think ideally CEA would be more open about our work.
So I naturally interpret a post which is essentially a statement of continuity as a plan to continue down this road.
I’ll just note that the point of this post was not to lay out all of CEA’s upcoming plans, nor explain how CEA will change, nor even to talk about CEA’s organizational values or principles. I believe Zach has more posts planned, but he is also very busy.
But if you think CEA, or at least your team, has been responsive in the past, the same statement of continuity is not naturally interpreted that way.
Apologies—to clarify, I don’t think I said that CEA or my team has been responsive in the past. I’m guessing that on average CEA and my team have been below my personal bar. I feel that the Forum team aims to be responsive, and it is good to continue to have that goal, and to continue to do better relative to that goal (such as by getting help from our team assistant). My dissertation about “team”, similarly, doesn’t mean that we have been great about following through on all the ideals that “team” implies. I just think that it is an accurate description of our goals, and what I personally aspire to do. Based on Zach’s comment, I’m optimistic that CEA will do better.
I’m open to suggestions here. Perhaps transparency can be modeled as worth a fraction of the overall value CEA (or the Online Team, or the Forum) produces? But surely there are diminishing returns at some point—I would be surprised if I should be spending 50% of my work time on activities that are primarily valued via “transparency”. I’m worried that this is so subjective that I would just use it to justify spending as much time as I would like on these activities. If I was allowed to ignore cost effectiveness I would naturally be more open.
I think we’re pretty close to agreement, so I’ll leave it here except to clarify that when I’ve talked about engaging/engagement I mean something close to ‘public engagement’; responses that the person who raised the issue sees or could reasonably be expected to see. So what you’re doing here, Zach elsewhere in the comments, etc.
CEA discussing internally is also valuable of course, and is a type of engagement, but is not what I was trying to point at. Sorry for any confusion, and thanks for differentiating.
when they’re considering buying mansions in the Oxford countryside/other controversial multimillion dollar calculations, publishing the cost-benefit calculation rather than merely asserting its existence
Huh? That wasn’t CEA’s decision; they just fiscally sponsored Wytham
IIRC it was done under the name ‘CEA’ when that name covered both the current org and what is now ‘Effective Ventures’. It was done at the instigation of a trustee of CEA-EV who, since they were the same legal entity, was also a trustee of CEA-CEA (I believe it’s still true that they’re currently the same organisation, CEA-CEA’s plans to spin off notwithstanding). I can’t find the initial announcement from CEA, but the justification was to host EA events and conferences there. Since by far the primary EA-event-and-conference-hosting organisation is CEA-CEA, it seems likely they were the primary beneficiary of the purchase.
I’m not really sure whether this technically qualifies as ‘only fiscally sponsoring Wytham’ (I doubt there’s a simple yes-no answer to the question), but there’s clearly a lot of entanglement with the organisation and people who a) are supposed to represent the EA community and b) benefited from the project. Even/especially if this entanglement is all perfectly innocent and well thought through, greater transparency would have made that more obvious and prevented much of the consequent muckraking of the movement by its critics.
I think it’s super reasonable for people to be confused about this. EV is a ridiculously confusing entity (or rather, set of entities), even without the name change and overlapping names.
I wouldn’t consider Wytham to have ever been a part of the project that’s currently known as CEA. A potential litmus test I’d use is “Was Wytham ever under the control of CEA’s Executive Director?” To the best of my knowledge, the answer is no, though there’s a chance I’m missing some historical context.
This comment also discusses this distinction further.
I’m nigh-certain that Wytham was never under the control of CEA’s Executive Director.
I think that this litmus test is pretty weak, though, as a response to Arepo’s suggestion that CEA was the primary beneficiary of Wytham. However, I also think that this suggestion is mistaken. I believe that CEA hosted <10% of the events at Wytham (maybe significantly less; I don’t know precisely, and am giving 10% as a round threshold that I’m relatively confident using as an upper bound).
In CEA’s case in particular, it doesn’t seem like they deal with biohazards or AI safety at a level necessitating high security
Agreed.
Regarding some of the specific points you’ve made:
I agree that it would be great to get the community more involved in thinking through what the forum should look like.
Wytham Abbey was an independently run project that they just fiscally sponsored.
I agree that funding sources should be public (although perhaps not individual donations below a certain amount).
Unsurprised PELTIV backfired.
I would love to see regular community office hours, though if these end up seeing low demand, or it’s just the same folks over and over, I think it would be reasonable for them to decide to discontinue this.
Regarding some of the other things, I honestly don’t see them as the highest priority, especially right now.
I wouldn’t say they’re all top priority right now either fwiw. What I’d like is some kind of public commitment to stuff like this as at least nice-to-haves, rather than something they seem to feel no obligation about at all. That’s all any of these ‘principles’ can be—a directional statement about culture. But CEA has been around for over a decade, with an average annual budget that must be well into the millions, so even ‘not top priority’ concerns could easily have been long since addressed if they’d had a historical interest in doing so.
I’m not sure I agree with that characterisation of Wytham Abbey. It was orchestrated by one of the trustees of the org on behalf of the org, with intended beneficiaries being more or less a subset of the org’s proxy beneficiaries. And this was done under their current moniker, which, per agb/Jason’s comment elsewhere in this discussion, is highly misleading—especially when they’re involved in projects like this. Consequently, when Wytham Abbey became a PR disaster, it helped bring the whole movement into disrepute. Arguably the main lesson was just ‘don’t use the public face of EA for black box projects’, but I think the backup lesson was ‘if you do, at least show enough of your working to prove to reasonable critical observers that it isn’t a backdoor way of giving the trustees a summer home.’
I guess I want CEA to focus very heavily on figuring out their overall strategy, including community engagement, and then communicating their overall decisions.
Conference cost breakdowns feels like an unnecessary distraction at this point, so long as they satisfy the auditor.
It would have made sense for there to be a bit more discussion about ethical side-constraints, but including transparency in the list of core principles would honestly be just weird because transparency isn’t distinctly EA. Beyond that, the importance of transparency is significantly complicated by the concept of infohazards in areas like biohazards or AI safety. I really don’t see it as CEA’s role to take a side in these debates. I think it makes sense for CEA to embrace transparency as a key organisational value, but it’s not a core principle of EA in general and we should accept that different orgs will occupy different positions on the spectrum.
I agree that absolute transparency is not ideal. That said, there is a version of transparency (i.e. ‘reasoning transparency’) that is a somewhat distinctively EA value.
[T]hese seem to be exactly the same principles CEA has stated for years. If nothing about them is changing, then it doesn’t give much reason to think that CEA will improve in areas it has been deficient to date. To quote probably-not-Albert-Einstein, ‘Insanity is doing the same thing over and over again and expecting different results.’
I really really wish ‘transparency’ would make the list again (am I crazy? I feel like it was on a CEA list in some form in the early days, and then was removed). I think there are multiple strong reasons for making transparency a core principle:
There’s a distinction between what an organization wants to achieve and how it wants to achieve it. The principles described in the original post are related to the what. They help us identify a set of shared beliefs that define the community we want to cultivate.
I think there’s plenty of room for disagreement and variation over how we cultivate that community. Even as CEA’s mission remains the same, I expect the approach we’ll use to achieve that mission will vary. It’s possible to remain committed to these principles while also continuing to find ways to improve CEA’s effectiveness.
I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself. Looking at the spectrum of approaches EA organizations take to doing good, I’m glad that there’s room in our community for a diversity of approaches. I think transparency is a good example of a value where organizations can and should commit to it at different levels to achieve goals inspired by EA principles, and as a result I don’t think it’s a principle that defines the community.
For example, I think it’s highly valuable for GiveWell to have a commitment to transparency in order for them to be able to raise funds and increase trust in their charity evaluations, but I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity. Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency. I think it’s reasonable for different individuals and organizations in the EA community to have different standards for transparency, and I’m happy for CEA to support others in their approach to doing good at a variety of points along that transparency spectrum.
When it comes to CEA, I think CEA would ideally be more transparent and communicating with the community more, though I also don’t think it makes sense for us to have a universal commitment to transparency such that I would elevate it to a “core principle.” I think different parts of our work deserve different levels of transparency. For example:
I think CEA should communicate about programmatic goals, impacts, and major decisions, which we’ve done before (see e.g. here)—but I think we would ideally be doing more.
On the other end of the spectrum, there are some places where confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team. I don’t expect this will be a novel idea for most readers, but I think it’s useful to illustrate that even for CEA, transparency isn’t an unalloyed good.
Somewhere in between is something like the EAG admissions bar. We do share significant amounts of information about admissions, but as Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process. I think it’s worth us potentially investing more in similar meta-transparency around where we will and won’t expect to share information. I suspect the lack of total transparency will upset some members of the community (particularly those who aren’t admitted to our events), but I think the tradeoffs are plausibly worth it.
I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does ‘recognition of tradeoffs’ involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn’t ‘scope sensitivity’ basically a subset of the concerns implied by ‘impartiality’? Is something like ‘do a counterfactually large amount of good’ supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does ‘scout mindset’ need to be on the list, when ‘thinking through stuff carefully and scrupulously’ is a prerequisite to effective counterfactual actions? On reading this post, I’m genuinely confused about what any of this means in terms of practical expectations about CEA’s activities.
I feel quite strongly that these principles go beyond applause lights and are substantively important to EA. Instead of going into depth on all of the principles, I’ll point out that many others have spent effort articulating the principles and their value, e.g. here, here, and here.
To briefly engage with some of the points in your comment and explain how I see these principles holding value:
Impartiality and scope sensitivity can exist independently of each other. Many contemporary approaches to philanthropy are highly data-driven and seek to have more impact, but they aren’t impartial with respect to their beneficiaries. As an example, the Gates Foundation’s US education program strikes me as an approach that is likely to be scope-sensitive without being impartial. They’re highly data-driven and want to improve US education as much as possible, but it seems likely to me that their focus on US education as opposed to e.g. educational programs in Nigeria stems from Gates being in the US rather than from an impartial consideration of all potential beneficiaries of their philanthropy.
I also think it’s possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don’t try to account for scope sensitivity (e.g. corporate campaigns are likely to improve the lives of orders of magnitude more animals per dollar).
I agree that a scout mindset and recognition of tradeoffs are important tools for doing counterfactually large amounts of good. I also think they’re still wildly underutilized by the rest of the world. Stefan Schubert’s claim that the triviality objection is beside the point resonates with me. The goal of these principles isn’t to be surprising, but rather to be action-guiding and effective at inspiring us to better help others.
‘I view the community as CEA’s team, not its customers’ sounds like a way of avoiding ever answering criticisms from the EA community, and really doesn’t gel with the actual focuses of CEA
I think it’s important to view the quote from the original post in the context of the following sentence: “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).” I believe the goals of engaged community members and CEA are very frequently aligned, because I believe most community members strive to have a positive impact on the world. With that being said, if and when having a positive impact on the world and satisfying community members do come apart, we want to keep our focus on the broader mission.
Some of the comments in response to this post make me worry that people are concerned we won’t listen to or communicate with the community. My take is that as “teammates,” we actually want to listen quite closely to the community and have a two-way dialogue on how we can achieve these goals. With that being said, based on the confusion in the comments, I think it may be worth putting the analogy around “teammates” and “customers” aside for the moment. Instead, let me say some concrete things about how CEA approaches engagement with the community:
I believe the majority of CEA’s impact flows through the community. In recent years, our decision making has placed the most emphasis on metrics around the number of positive career changes people have made as a result of our programs. We think the community has valuable input to give us on how we can help them help others, and we use their input to drive decisions. We frequently solicit feedback for this purpose, e.g. via our recent forum survey, or the surveys we run after most of our events.
The ultimate beneficiaries of our work are groups like children who would otherwise die from malaria, chickens who would otherwise suffer in cages, and people who might die or not exist due to existential catastrophes. I think these are populations that the vast majority of the EA community is concerned about as well. I see us as collaborating to achieve these goals, and I think CEA is best poised to achieve them by empowering people who share core EA principles.
While I think most people in EA would agree with the above goals, I do think at times that meta organizations have erred too far in the direction of trying to optimize for community satisfaction. I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion.
Concretely, this affects how we evaluate CEA’s impact. For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction. We do collect data on the latter and treat it as a useful input for our decision-making. Among other reasons, we believe it’s helpful because we think one of the things that satisfies many community members is when we help them improve their impact! But it’s an input, not the thing we’re optimizing for. We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety), but we ultimately think these funds can be used better elsewhere, or that our donors can instead not give to CEA and redirect the funding to beneficiaries that both they and we care about.
Sometimes, approaches to serving different parts of the community are in tension with each other. To return to EAG admissions, I think Eli Nathan does a good job in this comment discussing how we both incorporate stakeholder feedback but don’t optimize for making the community happy. Sometimes we have to make tough decisions on tradeoffs between how we support different parts of the community, and we’ll use a mix of community input and our own judgment when doing so.
I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP. I think it’s also important to note that I don’t perceive this to be a change from CEA’s historical practices (if anything, I think this dynamic has become less pronounced with recent changes at Open Philanthropy and CEA, although I still am very unsure how it will shake out in the long run).
I still want us to invest more in communicating with the community. I suspect you and I have different takes on what the optimal level of communication and transparency is, but I do agree that CEA should directionally be communicating more. Our main bottleneck to doing so right now is bandwidth, not desire. (We’re exploring ways to reduce that bottleneck but don’t want to make promises.) I think it’s a good thing when we engage more, and I’m supportive of efforts from our team to do so, whether that’s through proactive posts from us or engaging with community critiques. The desire to be transparent was one of the original inspirations for doing this principles-first post.
I think the principles-first approach is good at recognizing the diversity of perspectives in our community and supporting individual community members in their own journey to do good. We regularly have forum posts, event attendees and speakers, and group members whose cause prioritization reflects choices I disagree with. I think that’s good!
With that being said, if and when having a positive impact on the world and satisfying community members do come apart, we want to keep our focus on the broader mission.
I understand the primary concern posed in this comment to be more about balancing the views of donors, staff, and the community about having a positive impact on the world, rather than trading off between altruism and community self-interest. To my ears, some phrases in the following discussion make it sound like the community’s concerns are primarily self-interested: “trying to optimize for community satisfaction,” “just plain helping the community,” “make our events less of a pleasant experience (e.g. cutting back on meals and snack variety),” and “don’t optimize for making the community happy” (for EAG admissions).
I don’t doubt that y’all get a fair number of seemingly self-interested complaints from not-satisfied community members, of course! But I think modeling the community’s concerns here as self-interested would be closer to a strawman than a steelman approach.
I think if anyone was best able to make a claim to be our customers, it would be our donors.
CEA receives many fewer resources from its donors than from the community. Again, CEA would not really have a job without the community. An organization like CEA would totally exist without your big donors (like, the basic institution of having an “EA leadership organization” requires a few hundred k per year, which you could easily fundraise from a very small fraction of the community, and even at the current CEA burn-rate the labor-value of the people who are substantially directing their lives based on the broader EA community vastly eclipses the donations to CEA).
Your donors seem obviously much less important of a stakeholder than the community which is investing you with the authority to lead.
First off, I want to thank you for taking what was obviously a substantial amount of time to reply (and also to Sarah in another comment that I haven’t had time to reply to). This is, fwiw, already well above the level of community engagement that I’ve perceived from most previous heads of CEA.
On your specific comments, it’s possible that we agree more than I expected. Nonetheless, there are still some substantial concerns they raise for me. In typical Crocker-y fashion, I hope you’ll appreciate that me focusing on the disagreements for the rest of this comment doesn’t imply that they’re my entire impression. Should you think about replying to this, know that I appreciate your time, and I hope you feel able to reply to individual points without being morally compelled to respond to the whole thing. I’m giving my concerns here as much for your and the community’s information as with the hope of a further response.
> I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself.
In some sense this is obviously true, but I believe it gerrymanders the actual difference between ‘what’ and ‘how’.
For example, to my mind ‘scout mindset’ doesn’t seem any more central a goal than ‘be transparent’. In the post by Peter you linked, his definition of it sounds remarkably like ‘be transparent’, to wit: ‘the view that we should be open, collaborative, and truth-seeking in our understanding of what to do’.
One can imagine a world where we should rationally stop exploring new ideas and just make the best of the information we have (this isn’t so hard to imagine if it’s understood as a temporary measure to firefight urgent situations), and where major charities can make substantial decisions without explanation and this tends to produce trustworthy and trusted policies—but I don’t think we live in either world most of the time.
In the actual world, the community doesn’t really know, for example: with what weighting CEA prioritises longtermist causes over others; how it prioritises AI vs other longtermist causes; how it runs admissions at EAGs; why some posts get tagged as ‘community’ on the forum, and therefore effectively suppressed, while similar ones stay at the top level; why the ‘community’ tag has been made admin-editable-only; what regional pro rata rates CEA uses when contracting externally; what your funding breakdown looks like (or even the absolute amount); what the inclusion criteria for ‘leadership’ forums are, or who the attendees are; or many many other such questions people in the community have urgently raised. And we don’t have any regular venue for being able to discuss such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding—a simple weekly office hours policy could fix this.
> confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team
Confidentiality is largely unrelated to transparency. If in any context someone speaks to someone else in confidence, there have to be exceptionally good reasons for breaking that confidence. None of what I’m pointing at in the previous paragraph would come close to asking them to do that.
> Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process.
I think this statement was part of the problem… We as a community have no information on which to evaluate the statement, and no particular reason to take it at face value. Are there concrete examples of people gaming the system this way? Is there empirical data showing some patterns that justify this assertion (and comparing it to the upsides)? I know experienced EA event organisers who explicitly claim she’s wrong on this. As presented, Labenz’s statement is in itself a further example of lack of transparency that seems not to serve the community—it’s a proclamation from above, with no follow-up, on a topic that the EA community would actively like to help out with if we were given sufficient data.
This raises a more general point—transparency doesn’t just allow the community to criticise CEA, but enables individuals and other orgs to actively help find useful info in the data that CEA otherwise wouldn’t have had the bandwidth to uncover.
> I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity
These scenarios get wheeled out repeatedly for this sort of discussion (Chris Leong basically used the same ones elsewhere in this thread), but I find them somewhat disingenuous. For most charities, including all core-to-the-community EA charities, this is not a concern. I certainly hope CEA doesn’t deal in biosecurity or international politics—if it does, then the lack of transparency is much worse than I thought!
> Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency
All of the concerns they list there apply equally to all the charities that Givewell, EA Funds etc expect to be transparent. I see no principled reason in that article why CEA, OP, EA Funds, GWWC or any other regranters should expect so much more transparency from others than they’re willing to offer themselves. Briefly going through their three key arguments:
‘Challenge 1: protecting our brand’ - empirically I think this is something CEA and EV have substantially failed to do in the last few years. And in most of the major cases (continual failure for anyone to admit any responsibility for FTX; confusion around Wytham Abbey—the fact that that was ‘other CEA’ notwithstanding; PELTIV scores and other elitism-favouring policies; the community health team not disclosing allegations against Owen (or more politic-ly ‘a key member of our organisation’) sooner; etc) the bad feeling was explicitly over lack of transparency. I think publishing some half-baked explanations that summarised the actual thinking behind these decisions at the time (rather than in response to critics later exposing them) would a) have given people far less to complain about, and b) possibly have generated (kinder) pushback from the community that might have averted some of the problems as they eventually manifested. I have also argued that CEA’s historical media policy of ‘talk as little as possible to the media’ both left a void in media discussion of the movement that was filled by the most vociferous critics and generally worsened the epistemics of the movement.
‘Challenge 2: information about us is information about grantees’ - this mostly doesn’t apply to CEA. Your grantees are the community and community orgs, both groups of whom would almost certainly like more info from you. (It also applies to non-meta charities like Givedirectly, whom we nonetheless expect to gather large amounts of info on the communities they’re serving—but in that situation we think it’s a good tradeoff.)
‘Challenge 3: transparency is unusual’ - this seems more like a whinge than a real objection. Yes, it’s a higher standard than the average nonprofit holds itself to. The whole point of the EA movement was to encourage higher standards in the world. If we can’t hold ourselves to those raised standards, it’s hard to have much hope that we’ll ever inspire meaningful change in others.
> I also think it’s possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don’t try to account for scope sensitivity
This may be quibbling, but I would consider focusing on visible subsets of the animal population (esp pets) a form of partiality. This particular disagreement doesn’t matter much, but it illustrates why I don’t think gesturing towards principles that are really not that well defined is that helpful for giving a sense of what we can expect CEA to do in future.
> “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).”
I think this is politicianspeak. If AMF said ‘our primary goal is having a positive impact on the world rather than distributing bednets’ and used that as a rationale to remove their hyperfocus on bednets, I’m confident a) that they would have a much less positive effect on the world, and b) that Givewell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful—if you later find out that your core competencies aren’t that valuable then you can either disband the organisation, or attempt a radical pivot (as Charity Science’s founders did on multiple occasions!).
> I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion … We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)
I think this, along with the transparency question, is our biggest disagreement and/or misunderstanding. There’s a major equivocation going on here between exactly *which* members of the community you’re serving. I am entirely in favour of cutting costs at EAGs (the free wine at one I went to tasted distinctly of dead children), and of reducing all-expenses-paid forums for ‘people leading EA community-building’. I want to see CEA support people who actually need support to do good—the low-level community builders with little to no career development, especially in low or middle income countries whose communities are being starved; the small organisations with good track records but mercurial funding; all the talented people who didn’t go to top 100 universities and therefore get systemically deprioritised by CEA. These people were never major beneficiaries of the boom, but were given false expectations during it and have been struggling in the general pullback ever since.
> For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction.
I think the focus would be better placed on why attendees are satisfied or dissatisfied. If I go to an event and feel motivated to work harder in what I’m already doing, or build a social network that makes me feel enough better about my life that I counterfactually make or keep a pledge, these things are equally as important. There’s something very paternalistic about CEA assuming they know better than members of the community what makes those members more effective. And, like any metric, ‘positive career changes’ can be gamed, or could just be the wrong thing to focus on.
> I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP.
If both CEA and its donors are effectiveness-minded, this shouldn’t really be a distinction—per my comments about focus above, serving CEA’s community is about the most effective thing an org with a community focus can do, and so one would hope the donors would favour it. But also, this argument would be stronger if CEA only took money from major donors. As is, as long as CEA accepts donations from the community, sometimes actively solicits them, and broadly requires them (subject to honesty policy) from people attending EAGs—then your donors are the community and hence, either way, your customers.
(I work on the Forum but I am only speaking for myself.)
To respond to some bits related to the Forum:
In the actual world, the community doesn’t really know… why some posts get tagged as ‘community’ on the forum, and therefore effectively suppressed while similar ones stay at the top level
If you’re referring to “why” as in, what criteria are used for determining when to tag a post as “Community”, those are listed in the Community topic page. If you’re referring to “why” as in, how does that judgement happen, this is done by either the post author or a Forum Facilitator (as described here).
In the actual world, the community doesn’t really know… why the ‘community’ tag has been made admin-editable-only
We provided a brief explanation in this Forum update post. The gist is that we would like to prevent misuse (i.e. people applying it to posts because they wanted to move them down, or people removing it from posts because they wanted to move them up).
Thank you for flagging your interest in this information! In general we don’t publicly post about every small technical change we make on the Forum, as it’s hard to know what people are interested in reading about. If you have additional questions about the Forum, please feel free to contact us.
In general, our codebase is open source so you’re welcome to look at our PR descriptions. It’s true that those can be sparse sometimes — feel free to comment on the PR if you have questions about it.
we don’t have any regular venue for being able to discuss such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding—a simple weekly office hours policy could fix this.
If you have questions for the Forum team, you’re welcome to contact us at any time. I know that we have not been perfect at responding but we do care about being responsive and do try to improve. You can DM me directly if you don’t get a response; I am happy to answer questions about the Forum. I also attend multiple EAG(x) conferences each year and am generally easy to talk to there—I take a shift at the CEA org fair booth (if I am not too busy volunteering), and fill my 1:1 slots with user interviews asking people for feedback on the Forum. I think most people are excited for others to show an interest in their work, and that applies to me as well! :)
> “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).”
I think this is politicianspeak. If AMF said ‘our primary goal is having a positive impact on the world rather than distributing bednets’ and used that as a rationale to remove their hyperfocus on bednets, I’m confident a) that they would have a much less positive effect on the world, and b) that Givewell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful—if you later find out that your core competencies aren’t that valuable then you can either disband the organisation, or attempt a radical pivot (as Charity Science’s founders did on multiple occasions!).
I personally disagree that it would be better for CEA to have a goal that includes a specific solution to their overarching goal. I think it is often the case that it’s better to focus on outcomes rather than specific solutions. In the specific case of the Forum team, having an overarching goal that is about having a positive impact means that we feel free to do work that is unrelated to the Forum if we believe that it will be impactful. This can take the shape of, for example, a month-long technical project for another organization that has no tech team. I think if our goal were more like “have a positive impact by improving the EA Forum” that would be severely limiting.
I also personally disagree that this is “politicianspeak”, in the sense that I believe the quoted text is accurate, will help you predict our future actions, and highlights a meaningful distinction. I’ll refer back to an example from my other long comment: when we released the big Forum redesign, the feedback from the community was mostly negative, and yet I believe it was the right thing to do from an impact perspective (as it gave the site a better UX for new users). I think there are very few examples of us making a change to the Forum that the community overall disagrees with, but I think it is both more accurate for us to say that “our primary goal is having a positive impact on the world”, and better for the world that that is our primary goal (rather than “community satisfaction”).
All of these activities sound like services provided to the EA community. [...] the same way Givedirectly is and should be judged by how effectively they serve their beneficiaries (e.g. Africans below the poverty line), CEA should be judged by how effectively it serves its effective beneficiaries by empowering them to do those things.
This doesn’t sound right to me. If you want to focus on the customer analogy, the funders are paying CEA to provide impact according to their impact metrics. CEA engages with a subset of the EA community in ways it thinks will lead to impact according to its own theory of change and/or the ToC of the funder(s). Target groups can differ based on the ToC of each project, which is why you see people engaging on the forum but being rejected from EAGs.
I think there is much room for criticism when looking more closely at the ToCs, which is more to your next point:
The movement was founded on Givewell/GWWC doing reviews of and ultimately promoting charities—reviews for which transparency is an absolute prerequisite for recommendation
It seems importantly hypocritical as a movement to demand it of evaluees but not to practice it at a meta level
Both Givewell and GWWC want to shift donation money to effective charities, which is why they have to make a compelling case for donors. Transparency seems to be a good tool for this. The analogy here would be CEA making the case for them to get funded for their work. Zach has written a bit about how they engage with funders.
I personally think there is a good case to be made to try for broader meta-funding diversification, which would necessitate more transparency around impact measurement. The EA Meta Funding Landscape Report asks some good questions. However, I can also see that the EV of this might be lower than that of engaging with a smaller set of funders. Transparency and engaging with a broad audience can be pretty time-consuming and thus lower the cost-effectiveness of your approach.
(All opinions are my own and don’t reflect those of the organisations I’m affiliated with.)
Right, the community isn’t the ultimate beneficiary of CEA’s work. It’s roughly analogous to donors who receive GiveWell advice—the ToC works instrumentally through the community/GW donors but impact is derived from positive effects on ultimate beneficiaries (generally children in Africa). Somewhat analogously, an object-level org creates impact through its employees, but employees are not beneficiaries of the org.
Both Givewell and GWWC want to shift donation money to effective charities, which is why they have to make a compelling case for donors. Transparency seems to be a good tool for this. The analogy here would be CEA making the case for them to get funded for their work. Zach has written a bit about how they engage with funders.
That undermines the first motivation I gave for transparency, but I don’t think it really touches on the other four. And as you say, it only undermines the first to the extent that we don’t think it would be better that they get more diverse funding.
I think, if only for feedback-loop reasons, it would be far better for CEA to get more of its funding from the community—if they’re struggling to do so, that could be considered an important form of feedback in itself.
However, I can also see that the EV of this might be lower than that of engaging with a smaller set of funders. Transparency and engaging with a broad audience can be pretty time-consuming and thus lower the cost-effectiveness of your approach.
I feel like this proves too much. Givewell’s potential donors could make exactly the same claim, but Givewell repeatedly reinforced their belief that greater transparency is necessary to have high credence that the organisation in question is doing a good job. The fact that CEA’s outputs are less concrete/measurable/directly tied to human welfare if anything makes me think it’s more important that feedback loops are tightened than for Givewell evaluands.
The “team” metaphor is ambiguous, and I think an accurate interpretation of it doesn’t answer many questions.
The community isn’t the team in the sense that CEA is the manager. The only plausible rationale for that would be a mandate from the community, and I think we can exclude that based on the community not being the “customers.”
Thus, CEA seems to be in a leaderless co-worker type relationship, or a leaderless sports team co-member relationship, with other EAs and EA orgs. That’s a loose sort of team, and often an ineffective one [add sports metaphor appropriate to your culture here as mine would be US-centric.] For those sorts of teams to be effective, there generally has to be a lot of give and take from a position of rough equality.
There are also “teams” where everyone kinda does their own thing with relatively little coordination. I’m thinking somewhat of toddlers engaged in mostly parallel play rather than truly playing together. A valid model, but they are unlikely to build a really cool tower of blocks that way!
Secondly, I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does ‘recognition of tradeoffs’ involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn’t ‘scope sensitivity’ basically a subset of the concerns implied by ‘impartiality’? Is something like ‘do a counterfactually large amount of good’ supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does ‘scout mindset’ need to be on the list, when ‘thinking through stuff carefully and scrupulously’ is a prerequisite to effective counterfactual actions?
This poses some interesting questions, and I’ve thought about them a bit, although I’m still a bit confused.
Let’s start with the definition on effectivealtruism.org, which seems broadly reasonable:
Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.
So what EA does is:
1. find the best ways to help others
2. put them into practice
So, basically, we are a company with a department that builds solar panels and another that runs photovoltaic power stations using these panels. Both are related but distinct. If the solar panels are faulty, this will affect the power station, but if the power station is built by cutting down primary forest, then the solar panel division is not at fault. Still, it will affect the reputation of the whole organisation, which will affect the solar engineers.
But going back to the points, we could add some questions:
1. find the best ways to help others
a. How do we find the best ways to help?
b. Who are the others?
2. put them into practice
a. How do we put them into practice?
1.a seems pretty straightforward: If we have different groups working on this, then the less biased ones (using a scout mindset and being scope sensitive) and the ones using decision-making theories that recognize trade-offs and counterfactuals will fare better. Here, the principles logically follow from the requirements. If you want to make the best solar cells, you’ll have to understand the science behind them.
1.b Here, we can see that EA is based on the value of impartiality, but it is not a prerequisite for a group that wants to do good better. If I want to do the most good for my family, then I’m not impartial, but I still could use some of the methods EAs are using.
2.a Could be done in many different ways. We could commit massive fraud to generate money that we then donate based on the principles described in 1.
In conclusion, I would see EA as:
A research field that aims to find the best ways to help others
A practical community that aims to put the results of 1 into practice
Having worked in startups and finance, I can imagine that there might be ways in which EA ideas could be implemented cost-effectively without honesty, integrity, and compassion. Aside from the risks of this approach, I would also see dropping these values as leading to a very different kind of movement. If we’re willing to piss off the neighbours of the power plant, then this will affect the reputation of the solar researchers.
In describing the history of EA, we could include the different tools and frameworks we have used, such as ITN. But these don’t need to be the ones we’ll use in the future, so I see everything else as being downstream from the definition above.
Re-reading Will MacAskill’s Defining Effective Altruism from 2019, I saw that he used a similar approach that resulted in four claims:
The ideas that EA is about maximising and about being science-aligned (understood broadly) are uncontroversial. The two more controversial aspects of the definition are that it is non-normative, and that it is tentatively impartial and welfarist.
He didn’t include integrity and collaborative spirit. However, he posted in 2017 that these two are among the guiding principles of CEA and other organisations and key people.
Note: I had drafted a longer comment before Arepo’s comment; given the overlap, I cut the parts they already covered and posted the rest here rather than in a new thread.
I agree with Arepo that both halves of this claim seem wrong. Four of CEA’s five programs, namely Groups, Events, Online, and Community Health, have theories of change that directly route through serving the community. This is often done by quite literally providing them with services that are free, discounted, or just hard to acquire elsewhere. Sure, they are serving the community in order to have a positive impact on the wider world, but that’s like saying a business provides a service in order to make a profit; true but irrelevant to the question of whether the directly-served party is a customer.
I speculate that what’s going on here is:
CEA doesn’t want to coordinate the community the way any leader or manager would be expected to coordinate their team. That (a) seems like a quick path to groupthink and (b) would be hard given many members do not recognise CEA’s authority.
CEA also doesn’t want to feel responsible for making community members happy, because it feels the eternal critics that make up the community (hi!) will be unhappy regardless of what it does.
I’m sympathetic to both impulses, but if taken too far they leave the CEA <-> EA community relationship at an impasse and make the name ‘CEA’ a real misnomer. Regardless of preferred language, I hope that CEA will rediscover its purpose of nurturing and supporting the EA community by providing valuable services to its members[1] - a lower bar than ‘make these eternal critics happy’ - and I believe the short descriptions of those four teams quoted below already clearly point in that direction.
For me, this makes the served members customers, in the same sense that a parishioner is a customer of their church. Most businesses can’t make all prospective customers happy either! But if that fact makes them forget that their continued existence is contingent upon their ability to serve customers, then they are truly lost.
As I hope comes across, I do not think this is at all radical. But if CEA cannot or will not do this, I think it should change its name.
Note: I’m no longer at CEA, thoughts my own.
I feel kind of confused about the point you are making here. CEA is the Centre for Effective Altruism, not the Center for Effective Altruists. This is fairly different from many community building organizations; e.g. Berkeley Seniors’ mission is to help senior citizens in Berkeley per se (rather than advance some abstract idea which seniors residing in Berkeley happen to support).
I can’t tell if you
Disagree that CEA differs from many community building organizations in this way
Agree that it differs but disagree that it should
Agree that it differs but feel like this difference is small/pedantic and not worth highlighting
Agree that it differs but disagree that “customer vs. team” is a useful way to describe this difference
Something else?
I am not AGB, but it’s clear that a huge fraction of the power that CEA has comes from it being perceived as a representative of the EA community, and because the community empowered it to solve coordination problems between its members. That power is given conditional on CEA acting on behalf of the people who invested that power.
Sure, maybe CEA accepted those resources (and the expectations that came with that) with the goal of doing the most good, but de-facto CEA as an institution basically only exists because of its endorsement by the EA community, and the post as written seems to me like it basically is denying that power relationship and responsibility.
Lightcone in its stewardship of LW is in a very similar position. Our goal with LW is to develop an art of rationality and reduce existential risk, but as an institution we are definitely also responsible for optimizing for the goals of the other stakeholders who have invested in LessWrong (like the authors, commenters, Eliezer who founded the site, and the broader rationality community which has invested in LessWrong as a kind of town square). People would be really pissed if we banned long-term contributors to LW, even if we thought it was best by our own lights, and rightfully so. They have invested resources which make them a legitimate stakeholder in the commons that we are administering.
(there is some degree to which we do have leeway here because there is widespread buy-in for something like “Well Kept Gardens Die by Pacifism”, but that leeway comes from the fact that there is widespread buy-in for discretion-based moderation, and that buy-in does not exist for all forms of possible changes to LW)
Thanks! For what it’s worth, the thing you are describing seems consistent with describing EAs as “teammates” (I also think that sports teams are successful ~entirely because of the work of their constituent team members) but I concede that the term is vague.
[Edit: further explained and qualified in a new comment below.]
Agreed, although I would note that the application varies from function to function.
For instance, I don’t think it runs EAGs or funds EAGxes through power granted by the community. So I think CEA has considerably more room to do what it thinks best by its own lights when dealing with its events than in, e.g., operating the community health team.
I would put other core community infrastructure in a similar bucket as community health, at least to the extent it constitutes a function where coordination of effort is an important factor and CEA can be seen as occupying the field. For example, it makes sense to coordinate a single main Forum, a single sponsor of university groups at a particular university, etc.
Huh, EAG feels like one of the most obvious community-institutions. Like, it’s the central in-person gathering event of the EA community, and it’s exactly the kind of thing where you want to empower an organization to run a centrally controlled version of it, because having a Schelling-event is very valuable.
But of course, in empowering someone to do that, CEA accepts some substantial responsibility to organize the event with the preferences of the community in mind. Like, EAG is really hard to organize if you are not in an “official EA-representative” position, and a huge fraction of the complexity comes from managing that representation.
I could have been clearer that different CEA functions are on a continuum in their relationship with the community, rather than sounding more binary at points. Also, my view that CEA has more freedom around EAGs than certain other functions doesn’t mean I would assign no meaningful constraints.
That being said, I think the “desirability of empowering an organization to run a centrally controlled” function is probably necessary but not sufficient to rely on the community-empowerment narrative. Here, there are various factors that pull me toward finding a weaker obligation on CEA’s part—the obligation not to unfairly or inappropriately appropriate for its own objectives the assembling of many EAs in one city at one time in a way that deprives other actors of their opportunity to make a play for that external/community resource. In other words, I see a minimum duty to manage that resource in an interoperable and cooperative manner... but generally not a duty to allocate CEA’s own resources and programming decisions in a way that lines up with community preferences.
I don’t think there is anything that prevents an organization from running a conference, even a top-notch conference, by its own lights and without necessarily surrendering a significant amount of control to the community. One plausible narrative here is that CEA put on a top-notch conference that others couldn’t or didn’t match (backing from Open Phil and formerly FTXFF doubtless would help!) and that the centralizing elements are roughly the natural result of what happens when you put on a conference that is much better than the alternatives. In this narrative, there would be no implied deal that makes CEA largely the agent of the community in running EAG.
That strikes me as at least as plausible on its face as a narrative in which the community “empower[ed]” CEA to run a conference with centralizing tendencies as long as the community retained sizable influence regarding how it is run. And given my desire to incentivize orgs to organize (and funders to fund) top-notch conferences, as well as a default toward the proper response to a conference you don’t like being organizing your own, I am inclined to make the natural-result narrative my starting point.
At the same time, I recognize the coordination work associated with EAGs—although I would specifically emphasize the coordination value of having a bunch of EAs in about the same place at the same time away from their day jobs. To me, that’s the main resource that is necessarily shared, in the sense of being something that can by its nature only happen 2-3 times per year, and is of community origin (rather than a CEA resource). I would take a fairly hard line against CEA actions that I judged to be an unfair or inappropriate grab at that resource. So while I would not impose the same duty you imply, I would assign a choice for CEA between that duty and a duty to run EAGs in an interoperable and cooperative manner.
Under that alternate duty, I would expect CEA to play nice with people and orgs who want to plan their own speakers and events that happen during the days of (or just before/after) the EAG. I would also expect CEA to take reasonable efforts to present its attendees with an option to opt-in to Swapcard with people who are not EAG attendees but are attending one of the other, non-CEA events. Failing to do these kinds of things would constitute a misuse of CEA’s dominant position that deprives other would-be actors the ability to tap into the collective community resource of co-location in space and time, and deprives the individual community members of free choice.
On the other hand, the alternative duty would not generally extend to deferring to the community’s preference on cause-area coverage for functions organized by CEA. Or to CEA’s decisions about who to provide travel grants or admittance to its own events. CEA choosing to de-emphasize cause area X in its own event planning, or employ a higher bar for travel grants for people working in cause area X, does not logically preclude the community from doing these things itself. To the extent the community finds it difficult to perform these functions (or delegate another org to do so), that would update me toward the natural-result narrative and away from viewing CEA as a delegate who primarily exercises the community’s power.
In contrast, my implied model for university groups is that the maximum healthy carrying capacity is usually one group per university due to a limited resource (student interest/attention) that is independent of CEA or any other org. Interoperability or co-existence is impractical, as the expected result would be failure of both/all groups from stretching the resource too thin. Moreover, starting a university group is within the operational capabilities of a number of actors (most non-EA student groups do not receive much in the way of external support, so the barriers to entry are pretty low). This raises the need for coordination among numerous potential actors. Under those circumstances, the empowerment/cooperation narrative is pretty convincing.
And many of the reasons I’m relatively more inclined to give CEA a freer hand on EAGs are lacking with the Forum. There are reasons a variety of conferences would be desirable (even if you want a single flagship conference), while the positive side of the ledger for multiple fora is more marginal. The speech on the Forum isn’t CEA’s own, so I’m much less worried that expectations of community control of fora would reduce CEA’s incentives and ability to speak its own message. The examples of topics on which I would defer to CEA’s ability to use its own resources to pursue its own mission don’t have good analogues in the Forum context. There are many actors who could pull off running a central forum—the LW code could be forked, servers are fairly cheap, and the moderation lift would be manageable for a relatively small group of volunteers.
A thing you might not know is that I was on the founding team of the EA Global series (and was in charge of EA Global for roughly the first two years of its existence). This of course doesn’t mean I am right in my analysis here, but it does mean that I have a lot of detailed knowledge about the kind of community negotiations that were going on at the time.
I agree with a bunch of the arguments you made, but my sense is that when creating EA Global, CEA leaned heavily on its coordinating role within the community (which I think made sense).
Indeed, CEA took over the EA Summit from Leverage explicitly because both parties thought it was pretty important to have a centralized annual EA conference.
I didn’t know that, and adding in historical facts could definitely move me away from my starting point! For example, they could easily update me more toward thinking that (1) CEA would need to more explicitly disclaim intent to run the semi-official coordinating event, (2) it would need to provide some advance notice and a phase-out to allow other actors to stand up their own conferences that sought to fulfill a centralizing function; and (3) it would have a broader affirmative obligation to cooperate with any actor that wanted to stand up an alternative to EAG.
That’s fair, I didn’t really explain that footnote. Note the original point was in the context of cause prioritisation, and I should probably have linked to this previous comment from Jason which captured my feeling as well:
It seems possible, though far from obvious, that CEA’s funding base is so narrow it’s forced to focus on that target, in order to ensure the organisation’s survival from that direction. This was something I thought Zach covered nicely:
Thanks! That context is helpful.
While you raise a worthwhile point in that it probably would have been slightly better for this post to have a paragraph on ethical side constraints, I feel that the rest of this post is quite misguided (and that some points are likely due to an incomplete understanding of the top-level post).
CEA (and the EA movement as a whole) has been lacking in direction ever since Max stood down.
Having a clearly stated direction is an improvement in and of itself. It improves coordination and allows people to provide feedback on the direction of the community.
The shift in direction is that CEA is shifting further towards finding people who are (or could be) deeply committed to these principles and helping them deepen their understanding of them vs. shoveling as many people towards particular high-priority cause areas as possible.
The concretization of these principles is laid out in much more detail in resources that both of us are familiar with. There was no need for Zachary to go into more detail here, because this post is going the other way: pulling out general principles from specific discussions, norms, and practices within the community.
The mission is obviously more important than us. That should be uncontroversial.
I suspect that more EAs should dedicate their efforts to improving the health of the community and that this would increase the overall impact, but at the end of the day, the mission should come first[1].
In any case, counting up the number of activities CEA runs that achieve impact indirectly through the community is not particularly relevant to answering the question of whether CEA’s first duty is to the mission or the community.
It would have made sense for there to be a bit more discussion about ethical side-constraints, but including transparency in the list of core principles would honestly be just weird, because transparency isn’t distinctly EA. Beyond that, the importance of transparency is significantly complicated by the concept of infohazards in areas like biosecurity or AI safety. I really don’t see it as CEA’s role to take a side in these debates. I think it makes sense for CEA to embrace transparency as a key organisational value, but it’s not a core principle of EA in general and we should accept that different orgs will occupy different positions on the spectrum.
I’m not claiming that I’ve personally always lived up to this standard, but this should be the ideal.
Hey Chris :)
I’m not sure if you mean this question to be covered in the rest of your reply? If not, could you say concretely what you think I misunderstood? If so, I respectfully disagree that I misunderstood it:
Maybe I’m less familiar with the resources than you think? I know huge amounts have been written on these notions, but I know of nothing that would fix my problem of ‘I don’t see how stating these principles gives me any meaningful information about CEA’s future behaviour’.
I think that’s entirely consistent with what I’ve said. An organisation that aims to effect Y via X cannot afford to relegate X to an afterthought, or largely ignore the views of people strongly involved with X.
I’m concerned that ‘infohazards’ get invoked far too often, especially to deflect concerns about (non)transparency. In CEA’s case in particular, it doesn’t seem like they deal with biohazards or AI safety at a level necessitating high security, and even if they do have some kind of black ops program dealing with those things that they’re not telling us about, that isn’t the transparency I’m concerned about. Just a general commitment to sharing info guiding key decisions about the community with the community, such as
sharing forum changes they’re considering, and the case for/against them
making all their hires open, or giving clear reasons why when they don’t
describing what their prioritisation process actually is, inasmuch as it can be formalised, between e.g. longtermism and animal welfare/other considerations
when they’re considering buying mansions in the Oxford countryside or making other controversial multimillion-dollar decisions, publishing the cost-benefit calculation rather than merely asserting its existence
giving the breakdown of their funding sources
publishing the breakdown of their budget for EAGs
open-sourcing forum data (IIRC they might technically have done this? But with no documentation, and an API that you have to direct-link to)
avoiding behind-the-scenes work that they’d be embarrassed to have publicised (e.g. PELTIV scores)
generally cultivating a culture of directly engaging in discussion with the community more—e.g. regular office hours, rather than highly intermittent AMAs, and, in threads like this, sticking around for a discussion rather than posting a top-down announcement and then entirely ignoring the comments.
I work on the Forum team, but this comment only represents my personal views and not those of CEA. Also, I am responding to this comment in particular because it mentions the Forum by name. I may respond to other comments if I have time but no promises.
First off, I want to say thank you for your comment. I think the Forum serves as an important space for organizations to get feedback from the community and I’m happy that it’s doing so here. I will also say that I think writing clearly is hard, and I am not a particularly good writer, so I am happy to clarify if anything I say is unclear.
My understanding of the phrase “I view the community as CEA’s team, not its customers” is that CEA’s ultimate goal is to improve the world, and increasing the satisfaction of the EA community (or alternatively, satisfying any particular request an individual might have) is not the ultimate goal. I believe the purpose of laying this out is to be transparent and help readers understand and predict how CEA will act. My guess is that very often we will be improving the world by doing things that satisfy the EA community.
For the Forum in particular, user feedback is a vital input into how we prioritize our work. We gather this information via user interviews (such as at events, by reaching out to specific groups of people while developing features, and by broadly offering to do user interview calls, as mentioned in my Forum profile), by including links to feedback forms when testing things out and launching new features, by publishing posts and quick takes about our work, by running various surveys including the annual Forum user survey, and even by directly messaging users via the Forum to ask them questions. I genuinely believe that feedback is a gift, and I’m so grateful for people who take the time to provide it to us.
If you take one thing away from my comment, please remember that we love feedback—there are multiple ways to contact us listed here, including an anonymous option. You’re welcome to contact us with suggestions, questions, bug reports, feedback[1], etc. (I can only really speak for the Forum team, but I would guess other teams feel similarly.)
Earlier this year we implemented the ability to import Google Docs to the Forum and people gave us lots of positive feedback about that. I think most of the work on the Forum will be somewhere between “making the community happy” and “the community is mostly neutral, maybe a small subset are happy”—if you look at the features in our latest update post, I think basically all of them have been either requested by users or people have given us purely positive feedback on them[2]. One example of a change to the Forum that the EA community might have voted against is the big Forum redesign in 2023; as you can see, we mostly got negative feedback about it. However, when I’ve interviewed users new to the site, I overwhelmingly get positive feedback about the design. It’s clear to me that having a skilled designer improve the site’s usability was the right choice.
This reflects how I view my own work—to do good by supporting the EA community, which does not always mean that we should do what they would vote for[3].
I think some of the disagreement is that people interpret the terms “team” and “customers” differently. In some ways we do treat Forum users as customers—for example, our engineers rotate being on-call to respond to customer service requests. We think this is worth their time because we feel that our users provide significant value for the world, not because our end goal is a high customer satisfaction score, but the result is basically the same. As I referenced earlier, our team functions similarly to other tech teams. So for example, when we are building a feature for group organizers, we will do many user interviews with group organizers.

Thinking about my own experience as a customer, oftentimes websites will use dark patterns, compromise UX, prioritize engagement/addictiveness, and literally outright lie, all in order to maximize their profit. I am happy that we do not treat our users as customers in any of these ways.

One slightly different way of thinking about “customer” is more like “customer service”, where an organization should strive to satisfy any individual who files a complaint. Honestly I think the Forum team is pretty good at this given our small size, but I would like us to be able to prioritize issues that users report relative to the value of our other potential work and not automatically file customer service reports in the highest priority bucket.
I like the term “team” because that emphasizes that we all broadly have the same goal (improving the world) and I am happy for Forum users to act in service of that goal (even if they criticize my work), in the same way that I appreciate when users give me feedback about the Forum in a way that reflects understanding of that shared goal (like, “I have this suggestion for you, though I’m guessing that this wouldn’t affect many people so it’s probably low priority”). In practice, much of the way that the Forum makes progress on that goal is by “empowering [people] to work for/fundraise for or otherwise support charities.”

Another aspect of “team” I like is that this implies collaboration and transparency, since we have shared goals (so it would be against my interests to lie), whereas I think it’s entirely normal/expected for a company to mislead its customers[4]. “Team” means that we respect your time more than other websites (that treat you like customers) do, because we believe your time is valuable (for the world) and we want you to use it well, because we have shared goals.

When someone answers my inactive user feedback form saying that they use the Forum less now because they are focused on doing good directly via their job, I don’t feel like I have “lost a customer”. I feel happy that they are presumably correctly valuing their time and doing more good (although I hope they still occasionally return to contribute back to the community).
A point that multiple commenters reference is about how CEA handles criticism. In my opinion, someone who is on the same team as you is much more likely to take your criticism seriously than any entity to which you are a customer. For example, if I complain to a company about their shady business practices, I expect them to completely ignore me or possibly lie to me, but certainly not to actually consider my point. If you complain to the Forum team about something we are doing that you consider morally dubious, we actually engage with it (at least internally—we have not always done as well as I would like at responding publicly, and I hope we improve on this in the future.)
Given this, I personally disagree that we “relegate the EA community to an afterthought” and that we “largely ignore the views of people strongly involved with EA”, and I disagree that we implied that we plan to do these things in the future. In my opinion, viewing the EA community as CEA’s “team” does not preclude us from caring about our effect on the community, nor does it mean that we no longer want to nurture and support the community, nor does it imply that we will ignore criticism, nor does it mean that we don’t care about people’s opinion of our work. I would go so far as to say those are more important for a teammate to care about than a company to care about.
I believe the purpose of Zach’s post was to explain that CEA will focus on EA principles rather than specific cause areas, and that it was not meant to communicate anything about CEA’s principles as an organization. Personally I am quite pro-transparency and hope to post more about my work than has been the case in the past.
To respond to some specific points:
sharing forum changes they’re considering, and the case for/against them
We have done this for some projects in the past, such as when adding emoji reactions.
I’m happy to do more of this myself. Some reasons that I do not prioritize this:
Lack of demand (I appreciate you sharing what you would like to see from us! It’s hard to know what is worth us writing about otherwise. For example, it’s not clear to me if anyone got any value out of this data-sharing post and it took me a fair amount of time to put it all together.)
I believe that I have a bias towards thinking that the Forum is valuable/important, and so I try to counter that in various ways. In this case, because I care a lot about the Forum respecting people’s time, I want to push back on assuming that Forum-related questions are valuable/important enough to be worth their attention. We just ran a Forum user survey which was quite long—I spent a long time iterating on the text/questions and cutting things down, and in the end I was still pretty worried about asking for too much time. As a tech team we already prioritize work based on user feedback, so additional feedback gathered from a public post will also have diminishing returns.
Smaller things, like the fact that I’m quite busy and am a slow writer, and I find publishing things on the Forum pretty scary.
We shared a public version of our half-quarter OKR planning doc in our Forum update post. That doc gets updated right after we finalize our OKRs, and is currently the closest thing to this that exists.
open-sourcing forum data (IIRC they might technically have done this? But with no documentation, and an API that you have to direct-link to)
Our codebase is open source, and I personally think the documentation is quite good. We use GraphQL, which is a commonly used technology. If you have questions about accessing data, feel free to contact us.
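To make that concrete, here is a minimal sketch in Python of pulling post data from the endpoint. I'm assuming the query shape follows the commonly documented ForumMagnum pattern; the exact field names may differ from the live schema, so treat this as a starting point rather than authoritative documentation:

```python
# Minimal sketch: query the Forum's public GraphQL endpoint for top posts.
# The query shape follows the commonly documented ForumMagnum pattern;
# the field names are assumptions and may differ from the live schema.
import requests

QUERY = """
{
  posts(input: {terms: {view: "top", limit: 5}}) {
    results {
      title
      pageUrl
    }
  }
}
"""

response = requests.post(
    "https://forum.effectivealtruism.org/graphql",
    json={"query": QUERY},
)
response.raise_for_status()

for post in response.json()["data"]["posts"]["results"]:
    print(f'{post["title"]} ({post["pageUrl"]})')
```

If the fields don't match, the open-source codebase (and the endpoint's own introspection) is the ground truth.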
generally cultivating a culture of directly engaging in discussion with the community more
To this end, I will publicly suggest that if you have any questions for CEA, you should feel free to contact us.
Including critical feedback! Every time I talk to a user I emphasize that critical feedback is especially useful for us, because people are biased towards saying nice things to us (at least to our face—I think this is less the case online).
I actually don’t know of any specific requests for, or after-the-fact feedback about, site performance improvements, but I am confident that the work was worth doing. Improving site speed is one of the most evidence-based ways for a site to decrease its bounce rate and improve its SEO ranking. This type of issue, which either minorly inconveniences many people or disproportionately impacts people who are not Forum users but would have been, is hard to justify working on purely based on the goal of “community satisfaction”, but makes more sense under the goal of “improving the world”.
Not that customers normally get to unilaterally decide what a company does via a vote.
To be clear, I think any organization has incentives against being 100% transparent, and I don’t think CEA is at the ideal level of transparency. But when I compare my time working in for-profit companies to my time working at CEA, it’s pretty stark how much more the people at CEA care about communicating honestly. For example, in a previous for-profit company, I was asked to obfuscate payment-related changes to prevent customers from unsubscribing, and no one around me had any objection to this.
Thanks for sharing your experience of working on the Forum, Sarah. It’s good to hear that your internal experience of the Forum team is that it sees feedback as vital.
I hope the below can help with understanding the type of thing which can contribute to an opposing external impression. Perhaps some types of feedback get more response than others?
AFAICT I have done this twice, once asking a yes/no question about unclear forum policy and once about a Forum team post I considered mildly misleading. The first got no response, the other got a response which was inaccurate, which was unfortunate, though I certainly assume it was unintentionally so.
I want to be clear that I do not think I am entitled to get a response. I think the Forum team is entitled to decide it should focus on analytics not individuals, for example. I basically thought it had, and so mentally wrote off those pathways. But your comment paints a surprisingly different picture and repeatedly pushes these options, so it didn’t feel right to say that I disagree without disclosing a big part of why I disagree.
Looking to public, and frankly far more important, examples of this, the top comment on CEA’s last fundraising attempt is highly critical of the Forum / Online team’s direction and spend. At time of writing the comment has 23/2 agree/disagree votes and more karma than the top-level post it’s under. This seems like the kind of thing one prioritises responding to if trying to engage, and 10 months ago Ben West responded “I mostly want to delay a discussion about this until the post fully dedicated to the Forum”. That post never came out[1]. So again my takeaway was that the Forum team didn’t value such engagement.
As someone who directionally agrees with the quoted sentiments, this was helpful in clarifying part of what’s going on here. I personally think that CEA has been opaque for the last few years, for better or for worse[2]. Others I have heard from think the same[3]. So I naturally interpret a post which is essentially a statement of continuity as a plan to continue down this road. Arepo makes a similar point in the 2nd paragraph of their first comment. But if you think CEA, or at least your team, has been responsive in the past, the same statement of continuity is not naturally interpreted that way.
To the best of my knowledge. If it did, please link to it as a response to the comment! This type of thing is hard to search for, but I did spend ~5 minutes trying.
Since I’ve pushed CEA to be more responsive here and elsewhere, I want to note that distance is helpful in some contexts. I am unsurprised to hear that the Forum redesign in 2023 got negative feedback from entrenched users but positive feedback from new users, for example; seems a common pattern with design changes.
Long comment, so pulling out the relevant quote:
(Again: only speaking for myself, and here in particular I will avoid speaking about or for other people at CEA when possible.)
Yup, I think it’s very reasonable for people outside of CEA to have a different impression than I do. I certainly don’t fault anyone for that. Hopefully hearing my perspective was helpful.
I’m really sorry that our team didn’t properly respond to your messages. There are many factors that could affect whether or not any particular message got a response. We currently have a team assistant who has significantly improved how we manage incoming messages, so if you sent yours before she joined, I would guess someone dropped it by accident. As an engineer I know I have not always lived up to my own standards in terms of responding in a timely manner, and I do feel bad about that. While I still think we do pretty well for our small size, I’m guessing that overall we are not where I would personally like us to be.
Hmm, I currently don’t recall any post about Forum fundraising. I think we considered fundraising for the Forum, but I don’t remember if any significant progress was made in developing that idea. In my opinion, Ben and Oscar wrote multiple detailed replies to that comment, though I am sympathetic to the take that they did not quite respond to Nuno’s central point. I think this is just a case of things sometimes falling through the cracks, especially during times of high uncertainty, as was the case in this example. I feel optimistic that, with more stability and the ability to plan further ahead, CEA will do better.
I also want to differentiate between public and internal engagement. I read Nuno’s writing and discussed it with my colleagues. At the time I didn’t necessarily think I would have better answers than Ben so I didn’t feel the need to join the public conversation, but at this point I probably do have better answers. I’ll just broadly say that, I agree that marginal value is what matters, as do others on my team. We do analyze the marginal impact of our Forum work. I would be excited to write more about it publicly but it will take a fair amount of work to make it clear and comprehensible for the Forum audience (up to my personal standards). Interestingly, Nuno’s points push me against taking the time to communicate publicly / be more open. Every hour I spend on writing a comment (and it can take me hours—I am not particularly good at writing, my training is in software engineering) is an hour that I don’t know how to value in the marginal impact analysis, so it defaults to being worth $0[1]. I strongly feel responsible for using EA/charitable money well, so using my work time to do something that I ultimately won’t put any value on is difficult.
I don’t disagree with this. I personally would prefer that we had communicated publicly more in the past, and I think ideally CEA would be more open about our work.
I’ll just note that the point of this post was not to lay out all of CEA’s upcoming plans, nor explain how CEA will change, nor even to talk about CEA’s organizational values or principles. I believe Zach has more posts planned, but he is also very busy.
Apologies—to clarify, I don’t think I said that CEA or my team has been responsive in the past. I’m guessing that on average CEA and my team have been below my personal bar. I feel that the Forum team aims to be responsive, and it is good to continue to have that goal, and to continue to do better relative to that goal (such as by getting help from our team assistant). My dissertation about “team”, similarly, doesn’t mean that we have been great about following through on all the ideals that “team” implies. I just think that it is an accurate description of our goals, and what I personally aspire to do. Based on Zach’s comment, I’m optimistic that CEA will do better.
I’m open to suggestions here. Perhaps transparency can be modeled as worth a fraction of the overall value CEA (or the Online Team, or the Forum) produces? But surely there are diminishing returns at some point—I would be surprised if I should be spending 50% of my work time on activities that are primarily valued via “transparency”. I’m worried that this is so subjective that I would just use it to justify spending as much time as I would like on these activities. If I was allowed to ignore cost effectiveness I would naturally be more open.
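For concreteness, here is the kind of toy model I have in mind, sketched in Python. Every parameter in it is an illustrative guess of mine, not a figure CEA uses:

```python
# Toy model: transparency work is worth at most a capped fraction of the
# program's value, with diminishing returns in hours spent. All numbers
# are illustrative guesses, not CEA figures.

def transparency_value(program_value: float, hours: float,
                       cap_fraction: float = 0.10,
                       half_sat_hours: float = 20.0) -> float:
    """Value of `hours` of transparency work on a program worth
    `program_value`. At most `cap_fraction` of the program's value is
    attributed to transparency; `half_sat_hours` is the point at which
    half of that cap has been realized."""
    saturation = hours / (hours + half_sat_hours)  # rises from 0 toward 1
    return program_value * cap_fraction * saturation

# The marginal value of each extra hour falls, so the model stops
# justifying unbounded time on transparency write-ups by itself.
for h in (5, 20, 80):
    print(h, "hours:", round(transparency_value(1_000_000, h)))
```

Under these made-up parameters the first hours are worth far more than later ones, which matches the diminishing-returns intuition; the hard part, of course, is choosing `cap_fraction` in a way that isn't self-serving.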
Thanks for taking the time to respond.
I think we’re pretty close to agreement, so I’ll leave it here except to clarify that when I’ve talked about engaging/engagement I mean something close to ‘public engagement’; responses that the person who raised the issue sees or could reasonably be expected to see. So what you’re doing here, Zach elsewhere in the comments, etc.
CEA discussing internally is also valuable of course, and is a type of engagement, but is not what I was trying to point at. Sorry for any confusion, and thanks for differentiating.
Huh? That wasn’t CEA’s decision; they just fiscally sponsored Wytham
IIRC it was done under the name ‘CEA’ when that name covered both the current org and what is now ‘Effective Ventures’. It was done at the instigation of a trustee of CEA-EV who, since they were the same legal entity, was also a trustee of CEA-CEA (I believe it’s still true that they’re currently the same organisation, CEA-CEA’s plans to spin off notwithstanding). I can’t find the initial announcement from CEA, but the justification was to host EA events and conferences there. Since by far the primary EA-event-and-conference-hosting organisation is CEA-CEA, it seems likely they were the primary beneficiary of the purchase.
I’m not really sure whether this technically qualifies as ‘only fiscally sponsoring Wytham’ (I doubt there’s a simple yes-no answer to the question), but there’s clearly a lot of entanglement with the organisation and people who a) are supposed to represent the EA community and b) benefited from the project. Even/especially if this entanglement is all perfectly innocent and well thought through, greater transparency would have made that more obvious and prevented much of the consequent muckraking of the movement by its critics.
I think it’s super reasonable for people to be confused about this. EV is a ridiculously confusing entity (or rather, set of entities), even without the name change and overlapping names.
I wouldn’t consider Wytham to have ever been a part of the project that’s currently known as CEA. A potential litmus test I’d use is “Was Wytham ever under the control of CEA’s Executive Director?” To the best of my knowledge, the answer is no, though there’s a chance I’m missing some historical context.
This comment also discusses this distinction further.
I’m nigh-certain that Wytham was never under the control of CEA’s Executive Director.
I think that this litmus test is pretty weak, though, as a response to Arepo’s suggestion that CEA was the primary beneficiary of Wytham. However, I also think that this suggestion is mistaken. I believe that CEA hosted <10% of the events at Wytham (maybe significantly less; I don’t know precisely, and am giving 10% as a round threshold that I’m relatively confident using as an upper bound).
Agreed.
Regarding some of the specific points you’ve made:
• I agree that it would be great to get the community more involved in thinking through what the forum should look like.
• Wytham Abbey was an independently run project that they just fiscally sponsored.
• I agree that funding sources should be public (although perhaps not individual donations below a certain amount).
• Unsurprised PELTIV backfired.
• I would love to see regular community office hours, though if these end up seeing low demand, or it’s just the same folks over and over, I think it would be reasonable for them to decide to discontinue this.
Regarding some of the other things, I honestly don’t see them as the highest priority, especially right now.
I wouldn’t say they’re all top priority right now either fwiw. What I’d like is some kind of public commitment to stuff like this as at least nice-to-haves, rather than something they seem to feel no obligation about at all. That’s all any of these ‘principles’ can be—a directional statement about culture. But CEA has been around for over a decade, with an average annual budget that must be well into the millions, so even ‘not top priority’ concerns could easily have been long since addressed if they’d had a historical interest in doing so.
I’m not sure I agree with that characterisation of Wytham Abbey. It was orchestrated by one of the trustees of the org on behalf of the org, with the intended beneficiaries being more or less a subset of the org’s proxy beneficiaries. And this was done under their current moniker, which, per agb/Jason’s comment elsewhere in this discussion, is highly misleading—especially when they’re involved in projects like this. Consequently, when Wytham Abbey became a PR disaster, it helped bring the whole movement into disrepute. Arguably the main lesson was just ‘don’t use the public face of EA for black box projects’, but I think the backup lesson was ‘if you do, at least show enough of your working to prove to reasonable critical observers that it isn’t a backdoor way of giving the trustees a summer home.’
I guess I want CEA to focus very heavily on figuring out their overall strategy, including community engagement and then communicating their overall decisions.
Conference cost breakdowns feels like an unnecessary distraction at this point, so long as they satisfy the auditor.
I agree that absolute transparency is not ideal. That said, there is a version of transparency (i.e. ‘reasoning transparency’) that is a somewhat distinct EA value.
That would make more sense.
There’s a distinction between what an organization wants to achieve and how it wants to achieve it. The principles described in the original post are related to the what. They help us identify a set of shared beliefs that define the community we want to cultivate.
I think there’s plenty of room for disagreement and variation over how we cultivate that community. Even as CEA’s mission remains the same, I expect the approach we’ll use to achieve that mission will vary. It’s possible to remain committed to these principles while also continuing to find ways to improve CEA’s effectiveness.
I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself. Looking at the spectrum of approaches EA organizations take to doing good, I’m glad that there’s room in our community for a diversity of approaches. I think transparency is a good example of a value where organizations can and should commit to it at different levels to achieve goals inspired by EA principles, and as a result I don’t think it’s a principle that defines the community.
For example, I think it’s highly valuable for GiveWell to have a commitment to transparency in order for them to be able to raise funds and increase trust in their charity evaluations, but I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity. Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency. I think it’s reasonable for different individuals and organizations in the EA community to have different standards for transparency, and I’m happy for CEA to support others in their approach to doing good at a variety of points along that transparency spectrum.
When it comes to CEA, I think CEA would ideally be more transparent and communicating with the community more, though I also don’t think it makes sense for us to have a universal commitment to transparency such that I would elevate it to a “core principle.” I think different parts of our work deserve different levels of transparency. For example:
I think CEA should communicate about programmatic goals, impacts, and major decisions, which we’ve done before (see e.g. here)—but I think we would ideally be doing more.
On the other end of the spectrum, there are some places where confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team. I don’t expect this will be a novel idea for most readers, but I think it’s useful to illustrate that even for CEA, transparency isn’t an unalloyed good.
Somewhere in between is something like the EAG admissions bar. We do share significant amounts of information about admissions, but as Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process. I think it’s worth us potentially investing more in similar meta-transparency around where we will and won’t expect to share information. I suspect the lack of total transparency will upset some members of the community (particularly those who aren’t admitted to our events), but I think the tradeoffs are plausibly worth it.
I feel quite strongly that these principles go beyond applause lights and are substantively important to EA. Instead of going into depth on all of the principles, I’ll point out that many others have spent effort articulating the principles and their value, e.g. here, here, and here.
To briefly engage with some of the points in your comment and explain how I see these principles holding value:
Impartiality and scope sensitivity can exist independently of each other. Many contemporary approaches to philanthropy are highly data-driven and seek to have more impact, but they aren’t impartial with respect to their beneficiaries. As an example, the Gates Foundation’s US education program strikes me as an approach that is likely to be scope-sensitive without being impartial. They’re highly data-driven and want to improve US education as much as possible, but it seems likely to me that their focus on US education as opposed to e.g. educational programs in Nigeria stems from Gates being in the US rather than from an impartial consideration of all potential beneficiaries of their philanthropy.
I also think it’s possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don’t try to account for scope sensitivity (e.g. corporate campaigns are likely to improve the lives of orders of magnitude more animals per dollar).
I agree that a scout mindset and recognition of tradeoffs are important tools for doing counterfactually large amounts of good. I also think they’re still wildly underutilized by the rest of the world. Stefan Schubert’s claim that the triviality objection is beside the point resonates with me. The goal of these principles isn’t to be surprising, but rather to be action-guiding and effective at inspiring us to better help others.
I think it’s important to view the quote from the original post in the context of the following sentence: “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).” I believe the goals of engaged community members and CEA are very frequently aligned, because I believe most community members strive to have a positive impact on the world. With that being said, if and when having a positive impact on the world and satisfying community members does come apart, we want to keep our focus on the broader mission.
I worry some from the comments in response to this post that people are concerned we won’t listen to or communicate with the community. My take is that as “teammates,” we actually want to listen quite closely to the community and have a two-way dialogue on how we can achieve these goals. With that being said, based on the confusion in the comments, I think it may be worth putting the analogy around “teammates” and “customers” aside for the moment. Instead, let me say some concrete things about how CEA approaches engagement with the community:
I believe the majority of CEA’s impact flows through the community. In recent years, our decision making has placed the most emphasis on metrics around the number of positive career changes people have made as a result of our programs. We think the community has valuable input to give us on how we can help them help others, and we use their input to drive decisions. We frequently solicit feedback for this purpose, e.g. via our recent forum survey, or the surveys we run after most of our events.
The ultimate beneficiaries of our work are groups like children who would otherwise die from malaria, chickens who would otherwise suffer in cages, and people who might die or not exist due to existential catastrophes. I think these are populations that the vast majority of the EA community is concerned about as well. I see us as collaborating to achieve these goals, and I think CEA is best poised to achieve them by empowering people who share core EA principles.
While I think most people in EA would agree with the above goals, I do think at times that meta organizations have erred too far in the direction of trying to optimize for community satisfaction. I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion.
Concretely, this affects how we evaluate CEA’s impact. For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction. We do collect data on the latter and treat it as a useful input for our decision-making. Among other reasons, we believe it’s helpful because we think one of the things that satisfies many community members is when we help them improve their impact! But it’s an input, not the thing we’re optimizing for. We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety) because we ultimately think we can use those funds better elsewhere, or that our donors can redirect the funding away from CEA to beneficiaries that both they and we care about.
Sometimes, approaches to serving different parts of the community are in tension with each other. To return to EAG admissions, I think Eli Nathan does a good job in this comment discussing how we both incorporate stakeholder feedback but don’t optimize for making the community happy. Sometimes we have to make tough decisions on tradeoffs between how we support different parts of the community, and we’ll use a mix of community input and our own judgment when doing so.
I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP. I think it’s also important to note that I don’t perceive this to be a change from CEA’s historical practices (if anything, I think this dynamic has become less pronounced with recent changes at Open Philanthropy and CEA, although I still am very unsure how it will shake out in the long run).
I still want us to invest more in communicating with the community. I suspect you and I have different takes on what we feel like the optimal level of communication and transparency is, but I do agree that CEA should directionally be communicating more. Our main bottleneck to doing so right now is bandwidth, not desire. (We’re exploring ways to reduce that bottleneck but don’t want to make promises.) I think it’s a good thing when we engage more, and I’m supportive of efforts from our team to do so, whether that’s through proactive posts from us or engaging with community critiques. The desire to be transparent was one of the original inspirations for doing this principles-first post.
I think the principles-first approach is good at recognizing the diversity of perspectives in our community and supporting individual community members in their own journey to do good. We regularly have forum posts, event attendees and speakers, and group members whose cause prioritization reflects choices I disagree with. I think that’s good!
I understand the primary concern posed in this comment to be more about balancing the views of donors, staff, and the community about having a positive impact on the world, rather than trading off between altruism and community self-interest. To my ears, some phrases in the following discussion make it sound like the community’s concerns are primarily self-interested: “trying to optimize for community satisfaction,” “just plain helping the community,” “make our events less of a pleasant experience (e.g. cutting back on meals and snack variety),” and “don’t optimize for making the community happy” (for EAG admissions).
I don’t doubt that y’all get a fair number of seemingly self-interested complaints from not-satisfied community members, of course! But I think modeling the community’s concerns here as self-interested would be closer to a strawman than a steelman approach.
CEA receives many fewer resources from its donors than from the community. Again, CEA would not really have a job without the community. An organization like CEA would totally exist without your big donors (the basic institution of having an “EA leadership organization” requires a few hundred thousand dollars per year, which you could easily fundraise from a very small fraction of the community; and even at CEA’s current burn rate, the labor value of the people who are substantially directing their lives based on the broader EA community vastly eclipses the donations to CEA).
Your donors seem obviously much less important a stakeholder than the community, which is investing you with the authority to lead.
Hi Zachary,
First off, I want to thank you for taking what was obviously a substantial amount of time to reply (and also to Sarah in another comment that I haven’t had time to reply to). This, fwiw, is already well above the level of community engagement that I’ve perceived from most previous heads of CEA.
On your specific comments, it’s possible that we agree more than I expected. Nonetheless, there are still some substantial concerns they raise for me. In typical Crocker-y fashion, I hope you’ll appreciate that me focusing on the disagreements for the rest of this comment doesn’t imply that they’re my entire impression. Should you think about replying to this, know that I appreciate your time, and I hope you feel able to reply to individual points without being morally compelled to respond to the whole thing. I’m giving my concerns here as much for your and the community’s information as with the hope of a further response.
> I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself.
In some sense this is obviously true, but I believe it gerrymanders the boundary between ‘what’ and ‘how’.
For example, to my mind ‘scout mindset’ doesn’t seem any more central a goal than ‘be transparent’. In the post by Peter you linked, his definition of it sounds remarkably like ‘be transparent’, to wit: ‘the view that we should be open, collaborative, and truth-seeking in our understanding of what to do’.
One can imagine a world where we should rationally stop exploring new ideas and just make the best of the information we have (this isn’t so hard to imagine if it’s understood as a temporary measure to firefight urgent situations), and a world where major charities can make substantial decisions without explanation and this tends to produce trustworthy and trusted policies—but I don’t think we live in either world most of the time.
In the actual world, the community doesn’t really know, for example: with what weighting CEA prioritises longtermist causes over others; how it prioritises AI vs other longtermist causes; how it runs admissions at EAGs; why some posts get tagged as ‘community’ on the forum, and therefore effectively suppressed, while similar ones stay at the top level; why the ‘community’ tag has been made admin-editable-only; what region pro rata rates CEA uses when contracting externally; what your funding breakdown looks like (or even the absolute amount); what the inclusion criteria for ‘leadership’ forums are, or who the attendees are; or many many other such questions people in the community have urgently raised. And we don’t have any regular venue for discussing such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding—a simple weekly office hours policy could fix this.
> confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team
Confidentiality is largely unrelated to transparency. If in any context someone speaks to someone else in confidence, there have to be exceptionally good reasons for breaking that confidence. None of what I’m pointing at in the previous paragraph would come close to asking them to do that.
> Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process.
I think this statement was part of the problem… We as a community have no information on which to evaluate the statement, and no particular reason to take it at face value. Are there concrete examples of people gaming the system this way? Is there empirical data showing some patterns that justify this assertion (and comparing it to the upsides)? I know experienced EA event organisers who explicitly claim she’s wrong on this. As presented, Labenz’s statement is in itself a further example of lack of transparency that seems not to serve the community—it’s a proclamation from above, with no follow-up, on a topic that the EA community would actively like to help out with if we were given sufficient data.
This raises a more general point—transparency doesn’t just allow the community to criticise CEA, but enables individuals and other orgs to actively help find useful info in the data that CEA otherwise wouldn’t have had the bandwidth to uncover.
> I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity
These scenarios get wheeled out repeatedly for this sort of discussion (Chris Leong basically used the same ones elsewhere in this thread), but I find them somewhat disingenuous. For most charities, including all core-to-the-community EA charities, this is not a concern. I certainly hope CEA doesn’t deal in biosecurity or international politics—if it does, then the lack of transparency is much worse than I thought!
> Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency
All of the concerns they list there apply equally to all the charities that Givewell, EAFunds etc expect to be transparent. I see no principled reason in that article why CEA, OP, EA Funds, GWWC or any other regranters should expect so much more transparency than they’re willing to offer themselves. Briefly going through their three key arguments:
‘Challenge 1: protecting our brand’ - empirically I think this is something CEA and EV have substantially failed to do in the last few years. And in most of the major cases (continual failure for anyone to admit any responsibility for FTX; confusion around Wytham Abbey—the fact that that was ‘other CEA’ notwithstanding; PELTIV scores and other elitism-favouring policies; the community health team not disclosing allegations against Owen (or more politic-ly ‘a key member of our organisation’) sooner; etc) the bad feeling was explicitly over lack of transparency. I think publishing some half-baked explanations that summarised the actual thinking behind these at the time (rather than in response to them later being exposed by critics) would a) have given people far less to complain about, and b) possibly have generated (kinder) pushback from the community that might have averted some of the problems as they eventually manifested. I have also argued that CEA’s historical media policy of ‘talk as little as possible to the media’ both left a void in media discussion of the movement that was filled by the most vociferous critics and generally worsened the epistemics of the movement.
‘Challenge 2: information about us is information about grantees’ - this mostly doesn’t apply to CEA. Your grantees are the community and community orgs, both groups of whom would almost certainly like more info from you. (it also does apply to nonmeta charities like Givedirectly, who we nonetheless expect to gather large amounts of info on the community they’re serving—but in that situation we think it’s a good tradeoff)
‘Challenge 3: transparency is unusual’ - this seems more like a whinge than a real objection. Yes, it’s a higher standard than the average nonprofit holds itself to. The whole point of the EA movement was to encourage higher standards in the world. If we can’t hold ourselves to those raised standards, it’s hard to have much hope that we’ll ever inspire meaningful change in others.
> I also think it’s possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don’t try to account for scope sensitivity
This may be quibbling, but I would consider focusing on visible subsets of the animal population (esp pets) a form of partiality. This particular disagreement doesn’t matter much, but it illustrates why I don’t think gesturing towards principles that are really not that well defined is that helpful for giving a sense of what we can expect CEA to do in future.
> “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).”
I think this is politicianspeak. If AMF said ‘our primary goal is having a positive impact on the world rather than distributing bednets’ and used that as a rationale to remove their hyperfocus on bednets, I’m confident a) that they would become much less positive for the world, and b) that Givewell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful—if you later find out that your core competencies aren’t that valuable then you can either disband the organisation, or attempt a radical pivot (as Charity Science’s founders did on multiple occasions!).
> I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion … We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)
I think this, along with the transparency question, is our biggest disagreement and/or misunderstanding. There’s a major equivocation going on here between exactly *which* members of the community you’re serving. I am entirely in favour of cutting costs at EAGs (the free wine at one I went to tasted distinctly of dead children), and of reducing all-expenses-paid forums for ‘people leading EA community-building’. I want to see CEA support people who actually need support to do good—the low-level community builders with little to no career development, especially in low- or middle-income countries whose communities are being starved of funding; the small organisations with good track records but mercurial funding; all the talented people who didn’t go to top-100 universities and therefore get systemically deprioritised by CEA. These people were never major beneficiaries of the boom, but were given false expectations during it and have been struggling in the general pullback ever since.
> For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction.
I think the focus would be better placed on why attendees are satisfied or dissatisfied. If I go to an event and feel motivated to work harder at what I’m already doing, or build a social network who make me feel sufficiently better about my life that I counterfactually make or keep a pledge, these things are equally as important. There’s something very paternalistic about CEA assuming they know better than members of the community what makes those members more effective. And, like any metric, ‘positive career changes’ can be gamed, or could just be the wrong thing to focus on.
> I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP.
If both CEA and its donors are effectiveness-minded, this shouldn’t really be a distinction—per my comments about focus above, serving CEA’s community is about the most effective thing an org with a community focus can do, and so one would hope the donors would favour it. But also, this argument would be stronger if CEA only took money from major donors. As it is, as long as CEA accepts donations from the community, sometimes actively solicits them, and broadly requires them (subject to honesty policy) from people attending EAGs, then your donors are the community and hence, either way, your customers.
(I work on the Forum but I am only speaking for myself.)
To respond to some bits related to the Forum:
If you’re referring to “why” as in, what criteria are used for determining when to tag a post as “Community”, those are listed on the Community topic page. If you’re referring to “why” as in, how does that judgement happen, this is done by either the post author or a Forum Facilitator (as described here).
We provided a brief explanation in this Forum update post. The gist is that we would like to prevent misuse (i.e. people applying it to posts because they wanted to move them down, or people removing it from posts because they wanted to move them up).
Thank you for flagging your interest in this information! In general we don’t publicly post about every small technical change we make on the Forum, as it’s hard to know what people are interested in reading about. If you have additional questions about the Forum, please feel free to contact us.
In general, our codebase is open source so you’re welcome to look at our PR descriptions. It’s true that those can be sparse sometimes — feel free to comment on the PR if you have questions about it.
If you have questions for the Forum team, you’re welcome to contact us at any time. I know that we have not been perfect at responding but we do care about being responsive and do try to improve. You can DM me directly if you don’t get a response; I am happy to answer questions about the Forum. I also attend multiple EAG(x) conferences each year and am generally easy to talk to there—I take a shift at the CEA org fair booth (if I am not too busy volunteering), and fill my 1:1 slots with user interviews asking people for feedback on the Forum. I think most people are excited for others to show an interest in their work, and that applies to me as well! :)
I personally disagree that it would be better for CEA to have a goal that bakes in one specific means of achieving its overarching aim. I think it is often the case that it’s better to focus on outcomes rather than specific solutions. In the specific case of the Forum team, having an overarching goal that is about having a positive impact means that we feel free to do work that is unrelated to the Forum if we believe that it will be impactful. This can take the shape of, for example, a month-long technical project for another organization that has no tech team. I think if our goal were more like “have a positive impact by improving the EA Forum” that would be severely limiting.
I also personally disagree that this is “politicianspeak”, in the sense that I believe the quoted text is accurate, will help you predict our future actions, and highlights a meaningful distinction. I’ll refer back to an example from my other long comment: when we released the big Forum redesign, the feedback from the community was mostly negative, and yet I believe it was the right thing to do from an impact perspective (as it gave the site a better UX for new users). I think there are very few examples of us making a change to the Forum that the community overall disagrees with, but I think it is both more accurate for us to say that “our primary goal is having a positive impact on the world”, and better for the world that that is our primary goal (rather than “community satisfaction”).
This doesn’t sound right to me. If you want to focus on the customer analogy, the funders are paying CEA to provide impact according to their impact metrics. CEA engages with the subset of the EA community that it expects will produce impact according to its own theory of change and/or the ToC of the funder(s). Target groups can differ based on the ToC of the project, so you see people engaging on the forum but being rejected from EAGs.
I think there is much room for criticism when looking more closely at the ToCs, which is more to your next point:
Both Givewell and GWWC want to shift donation money to effective charities, which is why they have to make a compelling case for donors. Transparency seems to be a good tool for this. The analogy here would be CEA making the case for them to get funded for their work. Zach has written a bit about how they engage with funders.
I personally think there is a good case to be made to try for broader meta-funding diversification, which would necessitate more transparency around impact measurement. The EA Meta Funding Landscape Report asks some good questions. However, I can also see that the EV of this might be lower than that of engaging with a smaller set of funders. Transparency and engaging with a broad audience can be pretty time-consuming and thus lower the cost-effectiveness of your approach.
(All opinions are my own and don’t reflect those of the organisations I’m affiliated with.)
Right, the community isn’t the ultimate beneficiary of CEA’s work. It’s roughly analogous to donors who receive GiveWell advice—the ToC works instrumentally through the community/GW donors but impact is derived from positive effects on ultimate beneficiaries (generally children in Africa). Somewhat analogously, an object-level org creates impact through its employees, but employees are not beneficiaries of the org.
That undermines the first motivation I gave for transparency, but I don’t think it really touches on the other four. And as you say, it only undermines the first to the extent that we don’t think it would be better for them to get more diverse funding.
I think, if only for feedback-loop reasons, it would be far better for CEA to get more of its funding from the community—if they’re struggling to do so, that could be considered an important form of feedback in itself.
I feel like this proves too much. Givewell’s potential donors could make exactly the same claim, but Givewell has repeatedly reinforced their belief that greater transparency is necessary to have high credence that the organisation in question is doing a good job. The fact that CEA’s outputs are less concrete/measurable/directly tied to human welfare if anything makes me think it’s more important that feedback loops are tightened here than for Givewell evaluands.
The “team” metaphor is ambiguous, and I think an accurate interpretation of it doesn’t answer many questions.
The community isn’t the team in the sense that CEA is the manager. The only plausible rationale for that would be a mandate from the community, and I think we can exclude that based on the community not being the “customers.”
Thus, CEA seems to be in a leaderless co-worker type relationship, or a leaderless sports team co-member relationship, with other EAs and EA orgs. That’s a loose sort of team, and often an ineffective one [add sports metaphor appropriate to your culture here as mine would be US-centric.] For those sorts of teams to be effective, there generally has to be a lot of give and take from a position of rough equality.
There are also “teams” where everyone kinda does their own thing with relatively little coordination. I’m thinking somewhat of toddlers engaged in mostly parallel play rather than truly playing together. A valid model, but they are unlikely to build a really cool tower of blocks that way!
This poses some interesting questions, and I’ve thought about them a bit, although I’m still a bit confused.
Let’s start with the definition on effectivealtruism.org, which seems broadly reasonable:
So what EA does is:
find the best ways to help others
put them into practice
So, basically, we are a company with a department that builds solar panels and another that runs photovoltaic power stations using these panels. Both are related but distinct. If the solar panels are faulty, this will affect the power station, but if the power station is built by cutting down primary forest, then the solar panel division is not at fault. Still, it will affect the reputation of the whole organisation, which will affect the solar engineers.
But going back to the points, we could add some questions:
find the best ways to help others
How do we find the best ways to help?
Who are the others?
put them into practice
How do we put them into practice?
1.a seems pretty straightforward: If we have different groups working on this, then the less biased ones (using a scout mindset and being scope sensitive) and the ones using decision-making theories that recognize trade-offs and counterfactuals will fare better. Here, the principles logically follow from the requirements. If you want to make the best solar cells, you’ll have to understand the science behind them.
1.b Here, we can see that EA is based on the value of impartiality, but it is not a prerequisite for a group that wants to do good better. If I want to do the most good for my family, then I’m not impartial, but I still could use some of the methods EAs are using.
2.a Could be done in many different ways. We could commit massive fraud to generate money that we then donate based on the principles described in 1.
In conclusion, I would see EA as:
A research field that aims to find the best ways to help others
A practical community that aims to put the results of 1 into practice
Both governed by the following values:
Impartiality or radical empathy
Good character or collaborative spirit
Those two values seem to me to reflect the boundaries that the movement’s founders, the most engaged actors, and the biggest funders want to see.
Some people are conducting local prioritisation research, which might sometimes be worthwhile from an impartial standpoint, but giving up on impartiality would radically change the premise of EA work.
Having worked in startups and finance, I can imagine that there might be ways in which EA ideas could be implemented cost-effectively without honesty, integrity, and compassion. Aside from the risks of this approach, I would also see dropping this value as leading to a very different kind of movement. If we’re willing to piss off the neighbours of the power plant, then this will affect the reputation of the solar researchers.
In describing the history of EA, we could include the different tools and frameworks we have used, such as ITN. But these don’t need to be the ones we’ll use in the future, so I see everything else as being downstream from the definition above.
Re-reading Will MacAskill’s Defining Effective Altruism from 2019, I saw that he used a similar approach that resulted in four claims.
He didn’t include integrity and collaborative spirit. However, he posted in 2017 that these two are among the guiding principles of CEA and other organisations and key people.