I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
(There's a typo in the title: 2025, not 2023.)
4 - Following the crisis, the movement enters a period of retrenchment and disillusionment—this is where EA is currently. This decline could take a variety of forms: declining numbers of explicitly signed-up members, the gradual plateauing and waning of the group’s political influence, or significant numbers of prominent members distancing themselves from the movement. This is the most ‘you know it when you see it’ criterion of the four presented, and hard to be exact about historically, as the rise of a movement is often more closely studied than its gradual fall. Nevertheless, all of the candidates I’ve found do show this pattern of decline.
Were movements in which adherents / influence / resources flowed ~productively into a ~spiritual-successor movement within the scope of your research? Admittedly, drawing a line between the original movement and the spiritual successor movement could be a bit tricky.
Conditional on EA being in decline, we might be able to learn from those kinds of movements how to decline gracefully and in a way that best empowers spiritual-successor movements.
I don’t see a ton of overlap here. There are lots of social movements, and meaningful engagement with other social movements does take time, energy, and focus for both movements. Unless there is high overlap or unusual synergies, sometimes it is better for both movements to basically ignore each other. (I would emphasize that the points below apply to whether it is in radical feminism’s interests to expend resources on engaging with EA as much as the converse.)
For instance, although Open Phil has funded work on reducing incarceration rates in the US, that isn’t a current focus of any appreciable segment of the EA community to my knowledge. And to the extent that radical feminists are also working in or near core EA cause areas, it’s plausible that most radical feminists and most EAs have different values and goals that cannot be harmonized with better understanding of different approaches. The idea that their values in these areas are fundamentally compatible is plausible, but would need evidentiary support.
Is there evidence of meaningful competition between the two groups for the same donors and funding sources? Based on your description so far, the movements seem different enough to me that I would expect very few donors to be realistically open to funding EAs (but listening to radical feminist advisors), or vice versa.
In my view, EA generally shouldn’t say much at all about “issues [it is] poorly suited to solving” (and I suspect the same is true of radical feminism). If EA methodologies are not well-suited to solving a problem, then they probably aren’t well suited to figuring out which of the numerous other altruistic social movements are best situated to solve the problem either. Moreover, trying to recommend charities or charitable approaches in a bunch of non-EA cause areas, and doing a good job of it, would be a costly endeavor at best.
And realistically, there are tons of different altruistic or altruism-adjacent social movements, and there may be many of the size or significance of radical feminism. Expecting one’s reader to do a lot of research on one specific movement is a rather heavy ask.
My starting point would be to give the MA group a good bit of breathing room here. Based on this quote, it appears that Bregman is intentionally trying to do something distinct from EA. I think there’s a lot of potential value in that approach, and would be concerned about interfering with it. That may change as MA becomes more established, but for now I think it makes sense for MA to focus on being its own thing with a clear separation from EA.
While I too suspect that some of the distancing is “for PR reasons,” I suspect there is more to it than that. The quote suggests that Bregman is aiming for a movement with a broader scope rather than focusing as much on the recruitment of elite, highly engaged individuals. I personally think that is a vast area that EA has been largely unable to tap (in part for cultural reasons), and I’m not sure if significant interfacing with EA early on is going to help MA tap it. Once it has its own culture and is more developed, MA should be in a position to work more closely with EA without being swallowed by it.
Of course, MA will develop its own weaknesses and turnoffs. But there’s significant value in those weaknesses being somewhat different from the weaknesses and turnoffs of the EA community. We want to maximize the number of individuals who will find a comfortable home in an effectiveness-focused community of altruism, and having the EA-like movements be too similar doesn’t move us toward that goal.
Rubenstein says that “As the low-hanging fruit of basic health programs and cash transfers are exhausted, saving lives and alleviating suffering will require more complicated political action, such as reforming global institutions.” Unfortunately, there’s a whole lot of low-hanging fruit out there, and things have gotten even worse as of late with the USAID collapse and the UK cutting back on foreign aid.
In general, as the level of EA’s involvement and influence in a given domain increases, I become more concerned about the sorts of things that Rubenstein worries about here. When a particular approach is at a smaller size, it’s likely to concentrate on niches where its strengths shine and its limitations are less relevant. I would put the classic GiveWell-type interventions in that category, for instance. Compared to the scope of both the needs in global health & development and the actions of other actors, EA is still a fairly small fish.
I have only speculation, but it’s plausible to me that developments in AI could be playing a role. The original decision in 2000 was to sunset “several decades after [Bill and Melinda Gates’] deaths.” Likely the idea was that handpicked successor leadership could carry out the founders’ vision, and that the world several decades after their deaths would be similar enough to the world at the time of their death or disability for that plan to make sense. To the extent that Gates now thinks the world will change more rapidly than he believed in 2000, this plan may look less attractive than it once did.
What percent of my disposable income (after needed expenses) should a young uni student set aside for fuzzies, versus just giving it all to utilons and only doing free fun stuff (e.g., public TV, only donated clothes, etc.)?
Given the “likely to be under £500 a year”—this is not intended as an opinion for all uni students.
I would still support de minimis effective giving for habit-forming reasons, among others.
Fair, although it’s more the size of the bet relative to the individual’s income and resources that is a signal of seriousness. Using raw bet size overestimates the seriousness of people with more income and resources relative to, e.g., students, people in developing countries, or people of more modest means.
Institutional Trust
To embrace EA, you need to believe that at least some of its flagship organizations and leaders—80,000 Hours, Will MacAskill, Giving What We Can, etc.—are both well-intentioned and capable. Importantly, many skeptics leap straight to this “top of the trunk,” accusing EA groups of corruption or undue influence (e.g., “Open Philanthropy takes dirty billionaire money”).
While those concerns deserve a thoughtful debate, they should come after someone already agrees that (i) helping strangers matters, (ii) doing more good is better than doing a little, and (iii) we can meaningfully compare different interventions. In other words, don’t let institutional distrust be the very first deal-breaker—focus on the roots before you tackle the branches.
I don’t quite follow the logic here. Your first paragraph seems to acknowledge that some degree of institutional trust is part of the trunk rather than merely the branches, but the end of the second paragraph characterizes it as a branches issue.
I’d agree that institutional trust is in a sense less foundational than “root” issues like altruism and effectiveness, but being less foundational does not imply it is less practically critical to reach the end result. If A and B and C and D are all practically essential to reach any of E through H, it’s reasonable for someone who is being invited in to start with whichever of A-D they think is weakest out of respect for their time.
As an aside, if one goes so far as to say that EA as currently constituted doesn’t have anything meaningful to offer to those who do not “believe that at least some of its flagship organizations and leaders—80,000 Hours, Will MacAskill, Giving What We Can, etc.—are both well-intentioned and capable,” [1] then maybe that is a signal something is wrong.
[1] This is further along than your statement that this belief is necessary to “embrace” EA, so I don’t want to imply that it is your view.
We are stronger together, and I hope to demonstrate that each movement contains immense power to help the other.
This is plausible, but not obvious.
My default model is more along the lines of altruistic pluralism. Having a number of altruistic communities, each pursuing its distinct goals, strategies, and objectives with vigor generally strikes me as a good thing. In that universe, we get the benefits of each community not watered down with a bunch of other stuff. Each movement has the ability to adapt to its own niche, wherein it can play to its strengths and is less impeded by the tradeoffs it accepted along the way. Although synergies exist, I submit that there is a considerable risk of creating something like the United Way or altruistic nutraloaf by trying to mix a bunch of different and somewhat inconsistent approaches into a Grand Unified Theory of altruism.
Here, it seems to me that there would be considerable costs to both EA and radical feminism from a synergized approach. On the topic of donor relations, I predict that EA would end up irritating its donors in an attempt to be minimally acceptable to radical feminists, and radical feminism would have to seriously water down its critique of capitalism to make synergy potentially viable. I suspect you’d see more anti-synergies than synergies in other domains as well. For instance, being perceived as sympathetic toward radical feminism is going to hurt EA’s ability to influence the current US regime and other regimes on AI safety, while being perceived as sympathetic to EA is likely to hurt radical feminism’s relationship with more naturally allied movements. I’m just not seeing enough benefits to either movement over those available in a more pluralistic structure to overcome the costs.
I don’t have a good way to fully disentangle “is this criticism” (the purpose of the scope statement you quoted, intended to power a poll) and “is this criticism for which advance notice should be provided.” But I’ll address my personal opinion on the latter (and two of the three have relevant exclusions in the post as well):
The recent discussions around Epoch/Mechanize/ex-Epoch employees.
Excluded as “in response to a semi-recent report / blog post / etc. by the criticized person or organization itself.” Founding a company falls into the same class of events for which (1) a reasonable organization should expect to be prepared for relevant criticism in the aftermath of its recent action and (2) a notice expectation would impair the Forum’s ability to react appropriately to off-Forum events currently happening in the world. There’s also not much to prepare for in any event.
Re-analysis of an org’s published cost-effectiveness that would put its cost-effectiveness well below its current funders’ published funding bar.
Possibly criticism (as long as the CEA was not recent). I would generally prefer that advance notice be provided but there’s a good chance I wouldn’t judge the critic for not providing it:
I don’t think this type of criticism necessarily has a negative effect on reputation, although some of it certainly can (e.g., the recent VettedCauses / Sinergia dispute).
The nature and depth of what is being criticized matters. If this is a larger charity with resources to put forth a polished CEA, I am less likely to want to see advance notice than for a smaller charity or program. The more the critique relies on interpolations and assumptions, the more I want to see advance notice.
One issue here is that we want to incentivize orgs to make their work public rather than keeping it under wraps. If the community supports criticism without giving the organization a chance to contemporaneously respond, that is going to disincentivize publishing detailed stuff in the first place.
To my recollection, this stance is broadly consistent with how the community responded to various StrongMinds/HLI posts—it praised the provision of advance notice, but didn’t criticize its non-provision. My subjective opinion is that the conversations with advance notice were more productive and helpful.
Something like the recent discussions around people at Anthropic not being honest about their associations with EA, except it comes up randomly instead of in response to an article in a different venue.
This is criticism, but is not sufficiently “of an EA person or organization”—Anthropic is not an EA organization, and the quoted employees were acting primarily in their official capacity on behalf of a multi-billion dollar corporation. They are AI company executives who also happen to be EAs (well, maybe?). Even if one were to conclude otherwise, there are strong case-specific reasons to waive the expectation (including that advance notice would be futile; the quoted people were never going to come here and present a defense of their statements).
There are almost no examples of criticism clearly mattering (e.g. getting someone to significantly improve their project)
I don’t know what “clearly mattering” means, but I think this characterization unduly tips the scales. People who don’t like being criticized are often going to be open about that fact, which makes it easier to build an anti-criticism case under a “clearly” standard.
Also, “criticism” covers a lot of ground—you may have a somewhat narrower definition in mind, but (even after limiting to EA projects with <10 FTEs) people are understandably reacting to a pretty broad definition.
The most obvious use of criticism is probably to deter and respond to inappropriate conduct. Setting aside whether the allegations were sustained, I think that was a major intended mechanism of action in several critical pieces. I can’t prove that having a somewhat pro-criticism culture furthers this goal, but I think it’s appropriate to give it some weight. It does seem plausible on the margin that (e.g.) orgs will be less likely to exaggerate their claims and cost-effectiveness analyses given the risk of someone posting criticism with receipts.
A softer version of this purpose could be phrased as follows: criticism is a means by which the community expresses how it expects others to act (and hopefully influences future actions by third parties even if not by the criticized organization). In your model, “public critique clearly creates barriers to starting new projects,” so one would expect public critique (or the fear thereof) to influence decisions by existing orgs as well. Then we have to decide whether that critique is on the whole good or not.
Criticism can help direct resources away from certain orgs to more productive uses. The StrongMinds-related criticisms of 2023 come to mind here. The resources could include not only funding but also mindshare (e.g., how much do I want to defer to this org?) and decisions by talent. This kind of criticism doesn’t generally pay in financial terms, so it’s reasonable to be generous in granting social credit to compensate for that. These outcomes could be measured, but doing so will often be resource-intensive and so they may not make the cut under a “clearly” standard either.
Criticism can also serve the function of market research. The usual response to people who aren’t happy about how orgs are doing their work is to go start their own org. That’s a costly response—for both the unhappy person and for the ecosystem! Suppose someone isn’t happy about CEA and EA Funds spinning off together and is thinking about trying to stand up an independent grantmaker. First off, they need to test their ideas against people who have different perspectives. They would also need to know whether a critical mass of people would move their donations over to an independent grantmaker for this or other reasons. (I think it would also be fair for someone not in a position to lead a new org to signal support for the idea, hoping that it might inspire someone else.)
It’s probably better for the market-research function to happen in public rather than in back channels. Among other things, it gives the org a chance to defend its position, and gives it a chance to adjust course if too many relevant stakeholders agree with the critic. The counterargument to this one is that little criticism actually makes it into a new organization. But I’m not sure what success rate we should expect given considerable incumbency advantage in some domains.
That’s a much different (and more demanding) proposition than the one on which votes have already been cast. One might pose it as a separate question, though.
I don’t think the presence of elected people in some national groups materially impacts the poll as written. From the perspective of most voters (who do not live in Nordic countries), I believe there are no elected leaders. Some imprecision is hard to avoid given the practical limitations of the polling tool.
There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
but no, I don’t know what it is (or have a clear and viable plan for finding it)
Assuming there will continue to be three EAG-like conferences each year, these should all be replaced by conferences framed around specific cause areas/subtopics rather than about EA in general (e.g., by having two conferences on x-risk or AI-risk and a third one on GHW/FAW)
Some but not all should be replaced (low confidence)
Agree that the appropriate amount of time depends—but I also think there needs to be some sort of semi-clear safe harbor for critics here. Otherwise we are going to get excessively tied up in the meta-debate of whether the critic gave the org enough advance notice.
Yeah, I suspect most people (including myself) think it depends. I conceptualize the right side of the scale roughly as “there’s a presumption of advance notice, and where you place your icon on the right side is ~ about how strongly or weakly the case-specific factors need to favor non-notice to warrant a departure from that presumption.”
Giving meaningful advance notice of a post that is critical of an EA person or organization should be
I think it’s a good default rule, but think there are circumstances in which that presumption is rebutted.
My vote is also influenced by my inability to define “criticism” with good precision—and the resultant ambiguity and possible overinclusion pushes my vote toward the midpoint.
The Early OSP intervention at those universities could be influencing the outcomes, though; checking elite universities at which Early OSP wasn’t offered could help control for this.