Thank you for this timely and transparent post, and for all the additional work I’m sure your team is shouldering in response to this situation.
With Giving Tuesday and general end-of-year giving on the horizon, I think any indication from OPP of new anticipated funding gaps would be useful to the EA community as a whole. It would also be helpful to get a sense as soon as the information is available of what the overall cause area funding distribution in EA is likely to look like after this week.
I’m also sorry to hear this has been your experience. As both a woman and the director of EA NYC, I am always here to discuss cultural issues and possible recourse, as is our dedicated EA NYC community health coordinator, Megan Nelson: https://docs.google.com/forms/d/e/1FAIpQLSerQKtoQULEjuGGqTaKPoqj-WwCZJqKKri_pi0fdQ-Ag5xANQ/viewform
We both think about these topics a lot and are working to steer the community in an increasingly healthy and safe direction.
I found this post very informative. Thank you for sharing.
Some miscellaneous questions:

There was significant disagreement whether OP should start a separate program (distinct from Claire’s and James’ teams) focused on “EA-as-a-principle”/”EA qua EA”-grantmaking.
1. Is there information on why Open Phil originally made the decision to bifurcate community growth funding between LT and GHWB? (I’ve coincidentally been trying to better understand this and was considering asking on the Forum!) My impression is that this has had extreme shaping effects on EA community-building efforts, possibly more so than any other structural decision in EA.
There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that’s explicitly framed as an x-risk or AI-risk focused conference.
Open Phil’s Longtermist EA Community Growth team expects to rebalance its field-building investments by proportionally spending more on longtermist cause-specific field building and less on EA field building than in the past.
2. There are two perspectives that seem in opposition here:
The first is that existing organizations that have previously focused on “big tent EA” should create new x-risk programming in the areas they excel (e.g. conference organizing) and it is okay that this new x-risk programming will be carried out by an EA-branded organization.
The second is that existing organizations that have previously focused on “big tent EA” should, to some degree, be replaced by new projects that are longtermist in origin and not EA-branded.
I share the concern of “scaling back on forms of outreach with a strong track-record and thereby ‘throwing out the baby with the bathwater.’” But even beyond that, I’m concerned that big tent organizations with years of established infrastructure and knowledge may essentially be dismantled and replaced with brand new organizations, instead of recruiting and resourcing the established organizations to execute new, strategic projects. Just like CEA’s events team is likely better at arranging an x-risk conference than a new organization started specifically for that purpose, a longstanding regional EA group will have many advantages in regional field-building compared to a brand-new, cause-specific regional group. We are risking losing infrastructure that took years to develop, instead of collectively figuring out how we might reorient it.

In March 2023, Open Philanthropy’s Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions.
3. Finally, I would love to see a version of this that incorporates leaders of cause area and big tent “outreach/recruitment/movement-building” organizations who engage “on the ground” with members of the community. I respect the perspectives of everyone involved. I also imagine they have a very different vantage point than our team at EA NYC and other regional organizations. We directly influence hundreds of people’s experiences of both big-tent EA and cause-specific work through on-the-ground guidance and programming, often as one of their first touchpoints to both. My understanding of the value of cause-specific work is radically different from what it would have been without this in-person, immersive engagement with hundreds of people at varying stages of the engagement funnel, and at varying stages of their individual progress over years of involvement. And though I don’t think this experience is necessary to make sound strategic decisions on the topics discussed in the post, I’m worried that the disconnect between the broader “on the ground” EA community and those making these judgments may lead to weaker calibration.
I appreciate the extensive time and effort you’ve put into this post/project, and I also find the framing odd and potentially misleading. Health risks change when someone stops eating animal products, but the health risks of a vegan diet are substantially less bad than the health risks of a standard diet.
I believe you overstate the risks of nutrient insufficiency generally and largely fail to engage with the health ramifications of animal product consumption. The “trade-off” is a possible increased risk of nutrient deficiency in exchange for a decreased risk of a host of pervasive and debilitating health issues. If option A is “the nutrient deficiencies you found in previous research, such as iron and Vitamin D, which can have palpable effects if left unaddressed” and option B is “the standard risk of nutrient insufficiencies/deficiencies and an increased risk of cardiovascular disease, cancer, diabetes, obesity, and foodborne illness,” I think most people would readily opt for option A. All else equal and ethics aside, I’d personally rather take some supplements than increase my risk of cancer or salmonella or E. coli.
The “trade-off,” as a result, ends up skewing in a positive direction for most people switching from e.g. the Standard American Diet to a vegan diet; the vegan diet just comes with new, readily addressable, and substantially less scary risks.

People should pay attention to their nutrition. They should e.g. get blood tests at their yearly checkups. And no one should act like veganism is a silver bullet for all health issues. But I find this post overall misleading.
While I think I disagree pretty strongly with the idea CEA CH should be disbanded, I would like to see an updated post from the team on what the community should and should not expect from them, with the caveat that they may be somewhat limited in what they can say legally about their scope.
Correct me if I’m wrong, but I believe CEA was operating without in-house legal counsel until about a year ago. This was while engaging in many situations that could have easily led to a defamation suit had they investigated someone sufficiently resourced and litigious. I think it makes sense that their risk tolerance would shift while EVF is under Charity Commission investigation post-FTX and with attorneys now making risk assessments and recommendations across programs.
The issue for me is less “are they doing everything I’d like them to do” and more “does the community have appropriate expectations for them,” which is in keeping with the general idea EA projects should make their scopes transparent.
I’m really glad you chose to make this post and I’m grateful for your presence and insights during our NYC Community Builders gatherings over the past ~half year. I worry about organizers with criticisms leaving the community and the perpetuation of an echo chamber, so I’m happy you not only shared your takes but also are open to resuming involvement after taking the time to learn, reflect, and reprioritize.
Adding to the solutions outlined above, some ideas I have:
• Normalize asking people, “What is the strongest counterargument to the claim you just made?” I think this is particularly important in a university setting, but also helpful in EA and the world at large. A uni professor recently told me one of the biggest recent shifts in their undergrad students has been a fear of steelmanning, lest people incorrectly believe it’s the position they hold. That seems really bad. And it seems like establishing this as a new norm could have helped in many of the situations described in the post, e.g. “What are some reasons someone who knows everything you do might not choose to prioritize AI?”
• Greater support for uni students trialing projects through their club, including projects spanning cause areas. You can build skills that cross cause areas while testing your fit and achieving meaningful outcomes in the short term. Campaign for institutional meat reduction in your school cafeteria and you’ll develop valuable skills for AI governance work as a professional.
• Mentorship programs that match uni students with professionals. There are many mentorship programs to model this on, and most have managed to avoid any nefariousness or cult vibes.
• Restructuring fellowships such that they maintain the copy-paste element that has allowed them to spread while focusing more on tools that can be implemented across domains. I like the suggestion of a writing fellowship. I’m personally hoping to create a fellowship focused on social movement theory and advocacy (hit me up if interested in helping!).
Thanks for this update and for all the work you’re doing!
A potential pivot toward AI safety feels pretty significant, especially for such a core “big tent EA” team. Is it correct to interpret this as a reflection of the team’s cause prioritization? Or is this instead because of (1) particularly poor community health in the non-EA AI safety community relative to other causes, (2) part of a plan to spin off into other EA-adjacent spaces, or (3) something else?
Thanks! I’m happy to expound.
I’ve tried categorizing the public attendee list by their area of meta EA work. There are many different ways to categorize and this is just one version I put together quickly. It looks something like:
• Funding
  • Fundraising
  • Grantmaking
• Programming
  • Events
  • Education
  • Advising
• Growth and Strategy
• Service providers (Not included in the public list)
• Comms (Not included in the public list)
• Incubation
• Community Health
• Field-building
• High-level meta EA
  • OP, CEA, EVF, LessWrong
• “On the ground” meta EA (Not included in the public list)
  • Regional organizations
  • Professional organizations
  • University groups (Kuhan checks this last box but also has a cause-specific bent)
While the people listed make critical decisions regarding resource allocation, granting, setting strategic directions, or providing critical infrastructure, their experience is fundamentally different from those who are directly involved in “on the ground” organizations. Vaidehi writes that “issues pertinent to the community need to have meaningful, two way, sustained engagement with the community.” “On the ground” organizations are likely among the orgs in the EA ecosystem that do this the most.
I think the perspective of the wider breadth of “on the ground” community leaders is important, but I’ll speak to regional EA organizations, as that’s what I know best:
Before the FTX collapse, there was a heavy emphasis on making community building a long-term and sustainable career path. As a result, there are now dozens of people working professionally and often full-time on meta EA regional organizations (MEAROs).[1] By and large, we are a team of sorts: we’re in regular communication with each other, we have a shared and evolving sense of what MEAROs are and can be, and our strategic approaches intertwine and are mutually reinforcing. We essentially function as extended colleagues in a niche profession that feels very distinct to me from even other “on the ground” meta-EA community building (such as professional or uni groups). I don’t think anyone on the attendee list has run a MEARO, and certainly not in 2023.
There is a distinct zeitgeist among MEAROs. Consistently, I’ve been amazed at how MEARO leaders seem to independently land on the same conclusions and strategic directions as our peers across the globe, “multiple discovery” if you will. This zeitgeist is not captured in larger EA discourse, from the Forum to conversations I have with non-MEARO community leaders. And this MEARO zeitgeist is evolving rapidly, such that it looks very different from even four months ago. As a result, I don’t think anyone who hasn’t been intimately involved in MEAROs in the past 3-6 months can represent our general shared perspective.
This shared perspective is born out of three main ingredients:
“On the ground” intensive feedback loops: We are engaging directly with community members at all stages of the funnel—across EA causes and professions—understanding their concerns, aspirations, and challenges in real time. This provides a richness of information on everything from how people are finding EA, to reactions to current events, to what HEAs see as their biggest needs from community builders. Think of us as carrying out unofficial and constant surveying on everything you’d want the broader EA community’s feedback on.
High-level EA org feedback: EA orgs and projects from throughout the ecosystem consistently correspond and collaborate with MEAROs in a way that provides us with a decently holistic and up-to-date understanding of where EA is and where it is headed.
MEARO-level strategy: It is our job to think about what MEAROs are and what they should be to achieve maximum impact. We arguably have the most mental bandwidth for this task of anyone in EA and, again, this is shifting dramatically as the EA community and the causes we care about rapidly change.
I think segments of #1 and #3 are captured by some of the publicly listed attendees, and I imagine the attendees have an equally good or even substantially better grasp of #2, but it is the unique perspective that the combination of the three enables that I’m referencing.
At an event focused on meta coordination, it seems really important to have the perspective of those engaging constantly and deeply with “the EA masses,” immersed in regional strategy, and among the best able to shape the future of EA perception as the on-the-ground representatives of EA to thousands of people worldwide.
I talked this through with @James Herbert a bit and we discussed three possible cruxes here:
1. Are the people in the public attendee list doing different work from MEARO leaders? For example, have they directly done the things listed in Patrick’s comment, or advised hundreds of regular people in their geographic region?
2. If they have, how long is that knowledge valid? For example, EA looks very different in September 2023 than it did in September 2022, and that changes the nature of some aspects of MEARO leadership more than others.
3. Does directly doing the type of work involved in operating a MEARO give you a different set of knowledge that is useful in contexts like the Meta Coordination Forum?
I hope the above gestures at why I think the answer is “yes,” and I believe most other MEARO leaders are likely to agree.
[1] Yes, I totally just coined this acronym.
Once again, where is the board?
Two of the biggest questions for me are whether or not Nonlinear had a board of directors when Alice and Chloe worked for them and, if they did, whether an employee would know the identities and contact information of the board members and could feel reasonably safe approaching board members to express concerns and seek intervention. I can’t find evidence that they had a board at the time of the complaints, or that they have one now, a year and a half after Alice and Chloe stopped working with them. The only reference to a board of directors I see in the Google Doc is Lightcone’s board, which seems telling on a few levels.
Nonprofit boards are tasked with ensuring legal compliance, including compliance with relevant employment law, and with maintaining above-board practices in unconventional and riskier structures like the one Nonlinear chose to operate through. This situation looks very different if a legitimate board is in place than if employees don’t have that safeguard.
Though I’m sad about the hurt experienced by many people across the Nonlinear situation, I’m personally less concerned with the minutiae of this particular organization and more about what structures, norms, and safeguards can be established across the EA ecosystem as a whole to reduce risk and protect EA community members going forward. Boards and institutional oversight are a recurring theme, from FTX to Nonlinear (to maybe OpenAI?) and I’m personally more skeptical of any organization that does not make its board information readily apparent.
Thanks for this. I’ll add that I think even in less tense times, there are alternatives that achieve similar clarity while avoiding community culture downsides. Money is a fraught topic, EA has at times been accused of being classist, and people should be able to engage in EA discourse regardless of their relationship with money.
These comments are helpful but I’m still having a difficult time zeroing in on a guiding heuristic here. And I feel mildly frustrated by the counterexamples in the same way I do reading “well, they were always nice to me” comments on a post about a bad actor who deeply harmed someone or hearing someone who routinely drives drunk say “well, I’ve never caused an accident.” I think most (but not all) of my list falls into a category something like “fine or tolerable 9 times out of 10, but really bad, messy, or harmful that other 1 time such that it may make those other 9 times less/not worth it.” I’m not sure of the actual probabilities and they definitely vary by bullet point.
In your case in particular, I’ll note that a good chunk of your examples either directly involve Julia or involve you (the spouse of Julia, who I assume had Julia as a sounding board). This seems like a rare situation of being particularly well-positioned to deal with complicated situations. Arguably, if anyone can navigate complicated or risky situations well, it will be a community health professional. I’d assume something like 95% of people will be worse at handling a situation that goes south, and maybe >25% of people will be distinctly bad at it. So what norms should be established that factor in this potential? And how do we universalize in a way that makes the risk clear, promotes open risk analysis, and prevents situations that will get really bad should they get bad?
Without commenting on the rest of this case or EA Funds more broadly, this stood out to me:
At the EA funds website, they write that they usually grant money within 21 days from sending an application, and that their managers care (no further specification).
I was surprised the OP would request a response within one month when applying for a grant until I saw this truly is emphasized on the EA Funds site. This seems inconsistent with my understanding of many people’s experiences with EA Funds, and it seems like easy messaging to change to set more realistic expectations. I appreciate EA funders’ efforts toward quick turnaround times, but traditional funders typically take many months to reach a decision, even for comparably sized (i.e. small) grants. This seems like a strong case for “underpromise, overdeliver.”
I find pieces like this frustrating because I don’t think EA ever “used to be” one thing. Ten people who previously felt more at home in EA than they currently do will describe ten different things EA “used to be” that it no longer is, often in direct conflict with the other nine’s narratives. I’d much prefer people to say, “Here’s a pattern I’m noticing, I think it is likely bad for these reasons, and I think it wasn’t the case x years ago. I would like to see x treated as a norm.”
Woah, the vast majority of undercover investigations carried out by animal advocacy organizations are legal, which is why attempts to make them illegal (such as ag-gag laws in the U.S.) receive so much attention and pushback, with many either not passing or being overturned. I would greatly caution against even casually suggesting an organization is engaged in illegal activity. That said, as you’re possibly getting at in your final sentence, there is a big difference between active, intentional civil disobedience as part of a strategic campaign effort and lax disregard for the law for personal convenience or gain.
Thank you for the update and insight. A few questions:
1. What can the community expect regarding the renewal of funding for projects previously supported by OP that are now below the new bar? Should we expect a wave of projects to see their funding discontinued?
OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.
2. Can you share more about this process and any potential or anticipated effects for global health and wellbeing program areas?
Thank you for the thorough feedback. Those involved in drafting the statement considered much of what you laid out and created a more substantive, action-specific version before ultimately deciding against it. There were several reasons for this decision, among them: not wanting to commit (often under-resourced) groups to obligations they would currently be unable to fulfill, the various needs and dynamics of different EA communities, and the time-sensitive nature of getting a statement out. We do not intend for this to be the final word and there is already discussion about follow-up collaborations. We also chose to use the footnote method in the statement document to allow groups to make their own additional individual commitments publicly now.
I do want to push back on the idea that this statement is vacuous, counterproductive, and/or harmful. We chose to create this because of our collective, global, on-the-ground experiences discussing recent events with the communities we lead. I agree it should be silly or meaningless to declare one’s opposition to racism and sexism. But right now, for many following EA discourse, it unfortunately isn’t obvious where much of the community stands. And this is having a tangible impact on our communities and our community members’ sense of belonging and safety. This statement doesn’t solve this. But by putting our shared commitment in plain language, I believe we’ve laid a pavestone, however small, on the path toward a version of EA where statements like this truly are not needed.
If you somehow could convince a research group, not selected for caring a lot about animals, to pursue this question in isolation, I’d predict they’d end up with far less animal-friendly results.
I think this is a possible outcome, but not guaranteed. Most people have been heavily socialized to not care about most animals, either through active disdain or more mundane cognitive dissonance. Being “forced” to really think about other animals and consider their moral weight may swing researchers who are baseline “animal neutral” or even “anti-animal” more than you’d think. Adjacent evidence might be the history of animal farmers or slaughterhouse workers becoming convinced animal killing is wrong through directly engaging in it.
I also want to note that most people would be less surprised if a heavy moral weight were assigned to the species humans are encouraged to form the closest relationships with (dogs, cats). Our baseline discounting of most species is often born from not having relationships with them, not intuitively understanding how they operate because we don’t have those relationships, and/or objectifying them as products. If we lived in a society where beloved companion chickens and carp were the norm, the median moral weight intuition would likely be dramatically different.
I don’t understand this. There exist many long-term contracting relationships that Lightcone engages in. Seems totally fine to me. Also, many people prefer to be grant recipients instead of employees, those come with totally different relationship dynamics.
Just to hopefully quickly clarify this point in particular: there is a legal distinction between employees and contractors, with real legal implications, and I think EA orgs somewhat frequently misclassify workers. It’s totally possible Lightcone has long-term contractor or grant recipient relationships that are fully above board and happy for all involved; however, I know some organizations do not. This can be a cost-saving mechanism for the organization that comes at the expense of not just its adherence to employment law, but also security for its team (e.g. everything from eligibility for benefits, to increased individual tax burden, to ineligibility for unemployment compensation).
Thank you for this post and the work you’re doing. Given the small size and newness of the EA community and many orgs/projects, I’m personally also very worried about something like “right group of people, great practices, unforeseen or unpreventable dependencies that lead to major risk of collapse should a few key things go wrong in succession.” My impression is that in some cases things have been going very well but the external pressures have been so substantial and consistent the past few months that even very stable teams are trembling. Sound practices can help mitigate this, but I also want to see more people feeling ok saying, “this is a shit time and we’re treading water, but we’re able to tread water until we reach shore because we prioritized a healthy infrastructure beforehand.”
I have no personal insight on Nonlinear, but I want to chime in to say that I’ve been in other communities/movements where I both witnessed and directly experienced the effects of defamation-focused civil litigation. It was devastating. And I think the majority of the plaintiffs, including those arguably in the right, ultimately regretted initiating litigation. I sincerely hope this does not occur in the EA community. And I hope that threats of litigation are also discontinued. There are alternatives that are dramatically less monetarily and time-intensive, and more likely to lead to productive outcomes. I think normalizing (threats of) defamation-focused civil litigation is extremely detrimental to community functioning and community health.