Rockwell
“There seems to be movement towards animal welfare interventions and away from global health interventions.”
What is this based on? I don’t believe this tracks with e.g. distribution of EA-associated donations.
The application deadline has been extended and now closes on July 28 at 11:59 pm ET.
My best guess is that there is also a large U.S./EU difference here.
I do think you need to differentiate the Bay Area from the rest of the US, or at least from the US East Coast.
This seems like it has significant implications for the trajectory of global geopolitical stability and some GCR scenarios. I’m wondering whether or not others following this who are better informed than I am see this as a notable update.
Oh, it sounds like you might be confused about the context I’m talking about this occurring in, and I’m not sure that explaining it more fully is on-topic enough for this post. I’m going to leave this thread here for now to not detract from the main conversation. But I’ll consider making a separate post about this and welcome feedback there.
I do also want to clarify that I have no desire to “control which ideas and people [anyone] is exposed to.” It is more so, “If I am recommending 3 organizations I think someone should connect with, are there benefits or risks tied to those recommendations?”
I really appreciate you sharing your perspective on this. I think these are extremely hard calls, as evidenced by the polarity of the discussion on this post, and to some extent it feels like a lose-lose situation. I don’t think these decisions should be made in a vacuum and want other people’s input, which is one reason I’m flagging how this affects my work and the larger involvement funnels in EA.
Thanks for spelling this out.
I think to give some color to how this affects my work in particular (speaking strictly for myself as I haven’t discussed this with others on my team):
One of our organizational priorities is ensuring we are creating a welcoming and hopefully safe community for people to do good better, regardless of people’s identities. A large part of our work is familiarizing people with and connecting them to organizations and resources, including ones that aren’t explicitly EA-branded. We are often one of their first touch points within EA and its niches, including forecasting. Many factors shape whether people decide to continue and deepen their involvement, including how positive they find these early touch points. When we’re routing people toward organizations and individuals, we know that their perception of our recommendations in turn affects their perception of us and of EA as a whole.
Good, smart, ambitious people usually have several options for professional communities to spend their time within. EA and its subcommunities are just one option and an off-putting experience can mean losing people for good.
With this in mind, I will feel much more reluctant to direct community members to Manifold in particular and (EA-adjacent) forecasting spaces more broadly, especially if the community member is from a group underrepresented in EA. I think Manifold brings a lot of value, but I can’t in good conscience recommend they plug into communities I believe most people I am advising would find notably morally off-putting.
This is of course a subjective judgement call, I understand there are strong counterarguments here, and what repels one person also attracts another. But I hope this gives a greater sense of the considerations/trade-offs I (and probably many others) will have to spend time thinking about and reaching decisions around as a result of Manifest.
Austin, I’m gathering there might be a significant and cruxy divergence in how you conceptualize Manifold’s position in and influence on the EA community and how others in the community conceptualize this. Some of the core disagreements discussed here are relevant regardless, but it might help clarify the conversation if you describe your perspective on this.
Commenting just to encourage you to make this its own post. I haven’t seen a (recent) standalone post about this topic, it seems important, and though I imagine many people are following this comment section it also seems easy for this discussion to get lost and for people with relevant opinions to miss it/not engage because it’s off-topic.
You might be interested in the information security sphere, which some in the EA community focus on, especially in the context of safe AI. This 80,000 Hours podcast from 2022 is a good overview: https://80000hours.org/podcast/episodes/nova-dassarma-information-security-and-ai-systems/
I’m really excited to see this! Thank you for all the work you’ve put into it!
One piece of feedback (which others in the actual target audience are probably better placed to weigh in on): I noticed the people page currently seems to exclusively feature white Americans and Europeans, which I imagine might be off-putting to some of the people the website is designed to reach.
I’d like to second Ben and make explicit the concern about platforming ideologues whose public reputation is seen as pro-eugenics.
The best option for you will be very individual! I recommend taking the Animal Justice Academy course for an overview of types of advocacy in the animal movement and the Animal Advocacy Careers course if you are considering a career transition (now or in the future). If you are in/near the US, the AVA Summit in May is also a great way to dive in.
Without commenting on the rest of this case or EA Funds more broadly, this stood out to me:
“At the EA funds website, they write that they usually grant money within 21 days from sending an application, and that their managers care (no further specification).”
I was surprised the OP would request a response within one month of applying for a grant, until I saw this truly is emphasized on the EA Funds site. This seems inconsistent with my understanding of many people’s experiences with EA Funds, and it seems like easy messaging to change in order to set more realistic expectations. I appreciate EA funders’ efforts toward quick turnaround times, but traditional funders typically take many months to reach a decision, even for comparably sized (i.e. small) grants. This seems like a strong case for “underpromise, overdeliver.”
I think that’s a common intuition! I’m curious if there were particular areas covered (or omitted) from this post that you see as more clearly the natural function of one versus the other.
I’ll note that a couple factors seem to blur the lines between city and national MEARO functions:
- Size of region (e.g. NYC’s population is about 8 million, Norway’s is about 5.5 million)
- Composition of MEAROs in the area (e.g. many national MEAROs end up with a home base city or grew out of a city MEARO, some city MEAROs are in countries without a national MEARO)
I could see this looking very different if more resources went toward assessing and intentionally developing the global MEARO landscape in years to come.
Thank you for writing and sharing this, Alix! I’m sorry that it was scary for you to post and I’m glad you did. You also linked to so many other useful readings I hadn’t seen previously!
I’m wondering how these dynamics play out across different platforms and spaces—e.g. hiring processes for organizations with varying degrees of international staff vs. international online platforms like the Forum or EA Anywhere Slack vs. in-person events—and if there are better moderation mechanisms for acknowledging and accounting for language barriers across each. Online, for example, it’s easy to list the languages you speak and some organizations list this on their staff pages (e.g. “You can contact Alix in French and English.”). Maybe this could be added to Forum profiles or EA Global Swapcard profiles.
I’m also wondering how we can better account for this as community builders, especially in places with many immigrants. We remind attendees at the start of most EA NYC events that everyone present has a different starting point and we all have something to learn and something to teach. We began doing this, in large part, to make sure newcomers who don’t “speak EA” feel welcome. But there might be a benefit to also explicitly noting possible language barriers, given how deeply international and multicultural the community here is. This is also making me want to look into facilitation trainings specifically focused on these dynamics; I’m sure there are non-obvious things we could be doing better.
@abrahamrowe, I’m curious if you have insights on the larger point about good governance across the EA ecosystem. As evidenced by EV’s planned disbanding, sponsorship arrangements have a higher potential to become fraught. The opacity of the relationship between Rethink Charity and Nonlinear might be another example. (I.e. This is further indication Nonlinear employees wouldn’t have had the same protection and recourse mechanisms as employees of more conventionally governed 501c3s, especially those of established 501c3s sizeable enough to hire 21 staff members.) Given RP is growing into one of the larger fiscal sponsors through your Special Projects Team, it might be worth further commentary from the RP team on how you’re navigating risk and responsibility in sponsorship arrangements. Given RP’s track record of proactive risk mitigation, I imagine you all have given this ample thought and it might serve as a template for others.
Once again, where is the board?
Two of the biggest questions for me are whether or not Nonlinear had a board of directors when Alice and Chloe worked for them and, if they did, whether an employee would know the identities and contact information of the board members and could feel reasonably safe approaching them to express concerns and seek intervention. I can’t find evidence they had a board at the time of the complaints, or that they have one now, a year and a half after Alice and Chloe stopped working with them. The only reference to a board of directors I see in the Google Doc is Lightcone’s board, which seems telling on a few levels.
Nonprofit boards are tasked with ensuring legal compliance, including compliance with relevant employment law and with above-board practices in unconventional and riskier structures like the one Nonlinear chose to operate through. This situation looks very different if a legitimate board is in place than if employees lack that safeguard.
Though I’m sad about the hurt experienced by many people across the Nonlinear situation, I’m personally less concerned with the minutiae of this particular organization and more with what structures, norms, and safeguards can be established across the EA ecosystem as a whole to reduce risk and protect EA community members going forward. Boards and institutional oversight are a recurring theme, from FTX to Nonlinear (to maybe OpenAI?), and I’m more skeptical of any organization that does not make its board information readily apparent.
Thanks for the post! Quick flag for EAIF and EA Funds in general (@calebp?): I would find it helpful to have the team page of the website kept up to date and, possibly, for those who are comfortable sharing contact information (as Jamie did here) to have it listed in one place.
I actively follow EA Funds content and have been confused many times over the years about who is involved in what capacity and how those who are comfortable with it can be contacted.