Frankly, I’m unsure how much there is to learn from or about Leverage Research at this point. I’ve been in the effective altruism movement for almost as long as Leverage Research has been around, and the organization has had some kind of association with effective altruism since soon after it was founded. Its history is one of failed projects, many linked to the mismanagement of Leverage Research as an ecosystem of projects. In effective altruism, one of our goals is to learn from mistakes, including the mistakes of others, so we don’t make the same kinds of mistakes ourselves. It’s usually more prudent to judge mistakes on a case-by-case basis, rather than judging the actor or agency that perpetuates them. Yet other times there is a common thread. When there is evidence of repeated failures borne of systematic errors in an organization’s operations and worldview, often the most prudent lesson we can learn is why that organization repeatedly and consistently failed, and what it was about its environment that enabled a culture of barely ever course-correcting or being receptive to feedback. What we might be able to learn from Leverage Research is how EA(-adjacent) organizations should not operate, and how effective altruism as a community can learn to interact with them better.
Alright, thanks for letting me know. I’ll remember that for the future.
Hi. I’m just revisiting this comment now. I don’t have any more questions. Thanks for your detailed response.
I saw this post had negative karma, and I upvoted it back to positive karma. I’m making this comment to signal-boost that I believe this article belongs on the EA Forum, and that, if one is going to downvote articles like this that by all appearances are appropriate for the EA Forum, it would be helpful to provide a constructive explanation or criticism of them.
I’ve been in the EA community since 2012. As someone who has been in EA for that long, I entered the community taking to heart the intentional stance of ‘doing the most good’. Back then, a greater proportion of the community wanted EA to primarily be about a culture of effective, personal, charitable giving. The operative word of that phrase is ‘personal’: even though there are foundations behind the EA movement, like the Open Philanthropy Project, with a greater endowment than the rest of the EA community combined might ever hope to earn to give, for various reasons a lot of EAs still think it’s important that EA emphasize a culture of personal giving regardless. I understand and respect that stance, and respect its continued presence in EA. I wouldn’t even mind if it became a much bigger part of EA once again. This is a culture within EA that frames effective altruism as more of an obligation. Yet personally I believe EA is more effective, and does more good, by pursuing a more diverse array of approaches. I am glad EA has evolved in that direction, and so I think it’s fitting that this definition of EA reflects that.
I’ve come to develop exclusion criteria for entryists into EA whom EA, as a community, by definition, would see as bad actors, e.g., white supremacists. One set of philosophical debates within EA, and with other communities, is how far, and how fast, the circle of moral concern should expand. Yet this common denominator seems to imply a baseline agreement across all of EA that we would be opposed to people who seek to rapidly and dramatically shrink the circle of moral concern of the current human generation. So, to the extent someone:
1. shrinks the circle of moral concern;
2. does so to a great degree/magnitude;
3. does so very swiftly;
EA as a community should beware uncritically tolerating them as members of the community.
I’ve been thinking more that we may want to split up “Effective Altruism” into a few different areas. The main EA community should have an easy enough time realizing what is relevant, but this could help organize things for other communities.
People have talked about “splitting up” EA in the past to streamline things, while other people worry that doing so might needlessly balkanize the community. My own past observation of attempts to ‘split up’ EA into specialized compartments is that, more than being good or bad, they don’t have much consequence at all. So, I wouldn’t recommend more EAs make another uncritical attempt at doing so, if for no other reason than it strikes me as a waste of time and effort.
As mentioned in this piece, the community’s take on EA may be different from what we may want for academics. In that case one option would be to distill the main academic-friendly parts of EA into a new term in order to interface with the academic world.
The heuristic I use to think about the relationship between the EA community and “Group X” is to let members of the EA community who are part of Group X manage EA’s relationship with Group X. That heuristic could break down in some places, but it seems to have worked okay so far for different industry groups. For EA to think of ‘academia’ as an industry like ‘the software industry’ is probably not the most accurate framing. I just think the heuristic fits because EAs in academia will, presumably, know how to navigate academia on behalf of EA better than the rest of us will.
I think what has worked best is for different kinds of academics in EA to lead the effort to build relationships with their respective specializations, within both the public and private sectors (there is also the non-profit sector, but that is something EA is basically built out of to begin with). To streamline this process, I’ve created different Facebook groups for networking and discussion for EAs in different profession/career streams, as part of an EA careers public resource sheet. It is a public resource, so please feel free to share and use it however you like.
This is similar to how I describe effective altruism to those whom I introduce to the idea. I’m not in academia, so I mostly introduce it to people who aren’t intellectuals. However, I can trace some of the features of your more rigorous definition in the one I’ve been using lately: “ ‘effective altruism’ is a community and movement focused on using science, evidence, and reason to try to solve the world’s biggest/most important problems”. It’s kind of clunky, and it’s imperfect, but it’s what I’ve replaced “to do the most good” with, which, stated that generically, presents the understandable problems you went over above.
This is a recent criticism of GiveWell that I didn’t see responded to or accounted for in any clear way in the linked post. I haven’t read the whole thing closely yet, but no section appears to go over the considerations raised in that post. If they were sound, incorporating these criticisms into the analysis might make GiveWell’s top-recommended charities look more ‘beatable’. I was wondering if I was missing something in the post, and whether Open Phil’s analysis accounts for that possibility.
Do you know if these take into account criticisms of GiveWell’s methodology for estimating the effectiveness of their recommended charities?
Thanks for the response, Aaron. Had I been aware this post would have received Frontpage status, I would not have made my above comment. I notice my above comment has many votes, but not a lot of karma, which means it was a controversial comment. Presumably, at least several people disagree with me.
1. I believe the launch of new EA-aligned organizations should be considered of interest to people who browse the Frontpage.
2. It’s not clear to me that it’s only people who are relatively new to EA who primarily browse the Frontpage instead of the Community page. While I’m aware the Frontpage is intended primarily for newcomers, it’s not clear the EA Forum is actually used that way. It seems quite possible there are a lot of committed EA community members who are not casually interested in each update from every one of dozens of EA-aligned organizations. They may skip the ‘Community’ page, even though there are major updates like these that are more ‘community-related’ than ‘general’ EA content but nonetheless deserve a place on the Frontpage, where people who do not browse the Community tab often, and who are also not newcomers to EA, will see them.
3. I understand why there would be some hesitance to move posts announcing the launch of new EA-aligned projects/organizations to the Frontpage. The problem is that there aren’t really hard barriers preventing anyone from declaring a new project/organization aimed at ‘doing good’ and gaming EA by paying lip service to EA principles and practices while, behind the scenes, the organization is not (intending or trying to be) as effective or altruistic as it claimed. One reason this problem intersects with moving posts to the Frontpage of the EA Forum is that promoting just any self-declared EA-aligned project/organization to a place of prominence sends the signal, intentionally or not, that the project/org has received a kind of ‘official EA stamp of approval’. I brought up Michael Plant’s reputation not because I thought anyone’s reputation alone should dictate where their posts are assigned on the EA Forum. I mentioned it because, on the chance Aaron or the EA Forum administration was on the fence about whether to promote this post to the Frontpage, I wanted to vouch for Michael Plant as an EA community member whose record of fidelity to EA principles and practices in the projects he is involved with is such that, on priors, I would expect the new project/org he is launching, and its announcement, to be something the EA Forum should be willing to put its confidence behind.
4. I agree that ideally the reputation of an individual EA community member should not impact what we think of the content of their EA Forum posts, and that in practice we should aspire to live by this principle as much as possible. I just also believe it’s realistic to acknowledge that EA is a community of biased humans like any other, and so forms of social influence like individual reputation still impact how we behave. For example, if William MacAskill or Peter Singer were to announce the launch of a new EA-aligned project/org, then, based in part on their prior reputation, and barring a post that read like patent nonsense, which is virtually guaranteed not to happen, I expect it would be promoted to the Frontpage. My goal in vouching for Michael Plant, while he isn’t as well-known in EA as Profs. MacAskill or Singer, was to indicate I believe he deserves a similar level of credit in the EA community as a philosopher who practices EA with impeccable fidelity.
5. I also made my above comment while perceiving the norms for assigning posts to the ‘Community’ or ‘Frontpage’ sections as ambiguous. For the purposes of which posts announcing the launch of a new EA-aligned project/org will be assigned to the Frontpage, I find the following from Aaron a sufficient and satisfactory clarification of my prior concerns:
I think detailed posts that explain a specific approach to doing the most good make sense for this category, and this post does that while also happening to be about a new organization. Some but not all posts about new organizations are likely to be assigned Frontpage status.
6. Aaron dislikes my use of the word ‘relegate’ to describe how posts on the EA Forum are assigned to the Frontpage or the Community page, respectively. I used the word ‘relegate’ because that appears to be how promotions to the Frontpage on LessWrong work, and because I was under the impression the EA Forum had similar administration norms to LessWrong. Since the EA Forum 2.0 is based on the same codebase as LW 2.0, and the same team that built LW 2.0 was crucial in developing the EA Forum 2.0, I assumed the EA Forum admin team had significantly borrowed admin norms from the LW 2.0 team from which they inherited administration of the EA Forum 2.0. In his above comment, Aaron has clarified that the distinction between the ‘Frontpage’ and other tabs on the EA Forum is not the same as the distinction between the ‘Frontpage’ and other tabs on LW.
7. While the Frontpage and Community sections are intended to serve different purposes, and not as a measure of quality, I worry that, because of the availability heuristic, one default outcome of ‘Frontpage’ posts being, well, on the front page of the EA Forum and receiving more attention is that they will be assumed to be of higher quality.
These are the reasons that motivated me to make my above comment. Some but not all of these concerns are assuaged by Aaron’s response. All my concerns specifically regarding EA Forum posts that are announcements for new orgs/projects are assuaged. Some of my concerns about the ambiguity of which posts will be assigned to the Frontpage or Community tabs remain. However, they hinge upon disputable facts of the matter that could be resolved by EA Forum usage statistics, specifically comparative usage stats between the Community and Frontpage tabs. I don’t know if the EA Forum moderation team has access to that kind of data, but I believe such usage stats could greatly aid in resolving my concerns regarding how much traffic each tab, and its respective posts, receive.
While updates from individual EA-aligned organizations are typically relegated to the ‘Community’ page on the EA Forum, I believe an exception should be made for the public announcement of the launch of a new EA-aligned organization, especially one focused on an area that doesn’t already have major professional representation in EA. Such announcements are of interest to people who browse the EA Forum, including newcomers to the community, and are not what I would call just a ‘niche’ interest within EA. Also, specifically in the case of Michael D. Plant, I believe his reputation in EA precedes him such that we should credit the announcement of this project launch as being of significant interest to the EA community, and as one of the things coming out of EA that is of interest to the broader public.
It isn’t meant to mean just software engineering, but all engineering. Unfortunately, aside from the FB group I made, I wasn’t aware of any other EA materials and resources for engineers beyond those specifically for software engineering.
I’m sure lots of lefties would not like how market-friendly EA tends to be.
It’s unclear to me how representative this is of either EA or leftists. Year over year, the EA survey has shown the vast majority of EA to be “left-of-centre”, including a significant portion of the community whose politics might well be described as ‘far-left’. So while some leftists might be willing to infer, from one EA-aligned organization or a subset of the community being market-friendly, that all of EA is market-friendly, that’s an unsound inference. Additionally, even among leftist movements in the U.S. to the left of the Democratic establishment, there is enough ideological diversity that I would say many of them appreciate markets enough not to be ‘unfriendly’ to them. Of course there are leftists who aren’t friendly to markets, but I’m aware of a phenomenon of some factions on the Left claiming to speak on behalf of the whole Left, when in the vast majority of these cases there is no reason to conclude the bulk of the Left is hostile to markets. So, while ‘a lot’ of leftists may be hostile to markets, and ‘a lot’ of EA may be market-friendly, without more empirical evidence and logical qualification, those claims don’t provide useful info we can meaningfully work with.
Current Affairs overall is fairly amenable to EA and has a large platform within the left. I don’t think “they are a political movement that seeks attention and power” is a fair or complete characterization of the left. The people I know on the left genuinely believe that their preferred policies will improve people’s lives (e.g. single payer, increase minimum wage, more worker coops, etc.).
I think you’re misinterpreting. I never said that was a complete characterization, and fairness has nothing to do with it. Leftist movements are political movements, and I would say they’re seeking attention and power like any and every other political movement. I’m on the Left as well, and the fact that I and other leftists genuinely believe our preferred policies will improve people’s lives doesn’t change the fact that acquiring political power to achieve those goals, and acquiring the requisite public attention to achieve that political power, is necessary to achieve them. Publicly acknowledging this can be fraught because such language can easily, and often through motivated interpretation, be read by leftists or their sympathizers as describing a political movement covetous of power for its own sake. If one is too sheepish to explain otherwise, and stand up for one’s convictions, that’s a problem. Yet it shouldn’t be a problem. I’ve read articles by no less than Current Affairs’ editor-in-chief Nathan Robinson arguing that talking about power is something all leftists need to do more of.
Strongly upvoted. I don’t have anything else to add right now other than that I now understand why you’re asking this question as you have, and that I agree it makes sense as a first step given the background assumptions you’re coming in with.
I think your suggestion of Good Ventures making more grants to the EA Funds would be a better alternative, though before that I’d like to be confident the kinks have been worked out of the EA Funds management system. I was speaking more generally, though: all kinds of generic structures that merely decentralized grantmaking in EA more would be better. That it could be almost any structure with that feature was my point. I’m aware there are reasons people might behave as though so much decision-making being concentrated in Open Phil is optimal. If you have knowledge that a significant portion of the EA community sincerely believes the current, highly concentrated structure for capital allocation is optimal, please let me know. I would act on that, as I would see it as a ludicrous and dangerous notion for all of EA that I wouldn’t think even Open Phil or Good Ventures would condone.
Like I said in my above comment, asking interesting questions to avoid stating inconvenient but valuable opinions doesn’t go far in EA. If you think so much centralization of decision-making in Open Phil, in the person of Holden Karnofsky, is suboptimal, and there are better alternatives, why not just say so?
I think it’s unimportant. I would hope everyone is already aware we’ve arrived where we’re at for contingent reasons. I think it’s more than plausible we could have an alternative structure for capital allocation than the one we have now. I think this first step should have been combined with the next couple steps to just be its own first step.
Michael Dickens took the opposite route and said Open Phil should prioritize wild animal welfare. I also remember last year there were lots of people just asking questions about whether the EA Funds should be managed differently, and nothing happened, and then I made a statement more than a question, and then the EA Funds changed a lot.
Right, but if all he is doing is signing off, then you’re attributing to him only the final part of the decision, and treating that as if it’s the whole decision.
Right, I guess I was asking why you’re exploring it.
I don’t think you’ll get a better structure through the route you’re going, which is just asking questions about Open Phil. I figure one would at least try figuring out what structure they consider best, and then explain why Good Ventures should switch to that structure.