You might be interested in the information security sphere, which some in the EA community focus on, especially in the context of safe AI. This 80,000 Hours podcast from 2022 is a good overview: https://80000hours.org/podcast/episodes/nova-dassarma-information-security-and-ai-systems/
I’m really excited to see this! Thank you for all the work you’ve put into it!
One piece of feedback (that others in the actual target audience are probably better positioned to weigh in on): I noticed the people page currently seems to exclusively feature white Americans and Europeans, which I imagine might be off-putting to some people the website is designed to reach.
Announcement on the Future of EA NYC’s Dim Sum Restaurant
I’d like to second Ben and make explicit the concern about platforming ideologues whose public reputation is seen as pro-eugenics.
[Question] Orgs doing EA-aligned work but not on EA community’s radar?
The best option for you will be very individual! I recommend taking the Animal Justice Academy course for an overview of types of advocacy in the animal movement and the Animal Advocacy Careers course if you are considering a career transition (now or in the future). If you are in/near the US, the AVA Summit in May is also a great way to dive in.
Crowdsourced Overview of Funding for Regional Community Building Orgs
Without commenting on the rest of this case or EA Funds more broadly, this stood out to me:
At the EA funds website, they write that they usually grant money within 21 days from sending an application, and that their managers care (no further specification).
I was surprised the OP would request a response within one month when applying for a grant until I saw this truly is emphasized on the EA Funds site. This seems inconsistent with my understanding of many people’s experiences with EA Funds, and it seems like easy messaging to change to set more realistic expectations. I appreciate EA funders’ efforts toward quick turnaround times, but traditional funders typically take many months to reach a decision, even for comparably sized (i.e. small) grants. This seems like a strong case for “underpromise, overdeliver.”
I think that’s a common intuition! I’m curious if there were particular areas covered in (or omitted from) this post that you see as more clearly the natural function of one versus the other.
I’ll note that a couple factors seem to blur the lines between city and national MEARO functions:
- Size of region (e.g. NYC’s population is about 8 million, Norway’s is about 5.5 million)
- Composition of MEAROs in the area (e.g. many national MEAROs end up with a home base city or grew out of a city MEARO, some city MEAROs are in countries without a national MEARO)
I could see this looking very different if more resources went toward assessing and intentionally developing the global MEARO landscape in years to come.
Meta EA Regional Organizations (MEAROs): An Introduction
Thank you for writing and sharing this, Alix! I’m sorry that it was scary for you to post and I’m glad you did. You also linked to so many other useful readings I hadn’t seen previously!
I’m wondering how these dynamics play out across different platforms and spaces—e.g. hiring processes for organizations with varying degrees of international staff vs. international online platforms like the Forum or EA Anywhere Slack vs. in-person events—and if there are better moderation mechanisms for acknowledging and accounting for language barriers across each. Online, for example, it’s easy to list the languages you speak and some organizations list this on their staff pages (e.g. “You can contact Alix in French and English.”). Maybe this could be added to Forum profiles or EA Global Swapcard profiles.
I’m also wondering how we can better account for this as community builders, especially in places with many immigrants. We remind attendees at the start of most EA NYC events that everyone present has a different starting point and we all have something to learn and something to teach. We began doing this, in large part, to make sure newcomers who don’t “speak EA” feel welcome. But there might be a benefit to also explicitly noting possible language barriers, given how deeply international and multicultural the community here is. This is also making me want to look into facilitation trainings specifically focused on these dynamics; I’m sure there are non-obvious things we could be doing better.
@abrahamrowe, I’m curious if you have insights on the larger point about good governance across the EA ecosystem. As evidenced by EV’s planned disbanding, sponsorship arrangements have a higher potential to become fraught. The opacity of the relationship between Rethink Charity and Nonlinear might be another example. (I.e. This is further indication Nonlinear employees wouldn’t have had the same protection and recourse mechanisms as employees of more conventionally governed 501c3s, especially those of established 501c3s sizeable enough to hire 21 staff members.) Given RP is growing into one of the larger fiscal sponsors through your Special Projects Team, it might be worth further commentary from the RP team on how you’re navigating risk and responsibility in sponsorship arrangements. Given RP’s track record of proactive risk mitigation, I imagine you all have given this ample thought and it might serve as a template for others.
Once again, where is the board?
Two of the biggest questions for me are whether or not Nonlinear had a board of directors when Alice and Chloe worked for them and, if they did, whether an employee would know the identities and contact information of the board members and could feel reasonably safe approaching board members to express concerns and seek intervention. I can’t find evidence they had a board at the time of the complaints or do now a year and a half after Alice and Chloe stopped working with them. The only reference to a board of directors I see in the Google Doc is Lightcone’s board, which seems telling on a few levels.
Nonprofit boards are tasked with ensuring legal compliance, including compliance with relevant employment law, and with ensuring above-board practices in unconventional and riskier structures like the ones Nonlinear chose to operate through. This situation looks very different if a legitimate board is in place than if employees don’t have that safeguard.
Though I’m sad about the hurt experienced by many people across the Nonlinear situation, I’m personally less concerned with the minutiae of this particular organization and more concerned with what structures, norms, and safeguards can be established across the EA ecosystem as a whole to reduce risk and protect EA community members going forward. Boards and institutional oversight are a recurring theme, from FTX to Nonlinear (to maybe OpenAI?), and I’m more skeptical of any organization that does not make its board information readily apparent.
Makes sense, thank you! Maybe my follow-up questions would be: How confident would they need to be that they’d use the experience to work on biorisk vs. global health before applying to the LTFF? And if they were, say, 75:25 between the two, would EAIF become the right choice—or what ratio would bring this grant into EAIF territory?
Scattered first impressions:
I feel generally very positively about this update and have personally felt confused about the scope of EAIF when referring other people to it.
There are wide grey areas when attempting to delineate principles-first EA from cause-specific EA, and the effective giving examples in this post stand out to me as one thorny area. I think it may make sense not to fund an AI-specific or an animal-specific effective giving project through EAIF (the LTFF and AWF are more appropriate there), but an effective giving project that e.g. takes a longtermist approach or is focused on near-term human and nonhuman welfare seems different to me. Put differently: How do you think about projects that don’t cover all of EA, but also aren’t limited to one cause area?
For this out-of-scope example in particular, I’m not sure where I would route someone to pursue alternative funding in a timely fashion:
Funding a very promising biology PhD student to attend a one-month program run by a prestigious US think tank to understand better how the intelligence community monitors various kinds of risk, such as biological threats ($6,000)
Maybe Lightspeed? But I worry there isn’t currently other coverage for funding needs of this sort.
I’m worried about people couching cause-specific projects as principles-first, but there is already a heavy tide pushing people to couch principles-first projects as x-risk-specific, so this might not be a concern.
I’m really happy to see you thinking about digital minds and (seemingly) how to grow s-risk projects.
Thanks for asking this and clearly giving the issue thought and care!
In short, the lives of “pasture-raised” hens are still, in my opinion, one of the worst existences imaginable, even in the best of cases. Speaking for the US:
- The hens have parents, and their parents’ conditions are not captured by the certification.
- The hens should have had brothers, but they are killed the day they hatch, typically by being ground up alive while fully conscious.
- The above is true across the board because the same hatcheries are used across the industry.
- “Pasture-raised” is not a term regulated by the USDA. Egg companies have been consistently called out and sued for their false advertising.
- On the few farms that do actually give hens access to the outdoors, the hens still face a plethora of environmental conditions that harm their welfare: exposure to parasites, exposure to disease (including Highly Pathogenic Avian Influenza, which has killed over 17 million birds since the current outbreak began), exposure to extreme elements, and exposure to predation.
- Hens on farms with exposure to viruses like Avian Influenza are killed in horrific ways, often by “ventilation shutdown,” through which they’re baked alive.
- Regardless of where they live, the hens are all the same intensively bred breeds, and they all experience a plethora of excruciating and lethal health conditions caused by laying eggs at an extremely unnatural rate: the highest known rate of ovarian cancer of any species, prolapses where their reproductive tract literally falls out of their bodies, impactions from egg material they couldn’t push out, and sepsis from that egg material. I wrote about this in more depth on HuffPost years ago.
- They receive no individualized care, which prolongs their suffering.
- They are killed when they are 18 to 24 months old. Their undomesticated ancestors can live 30 years; with appropriate care, domestic hens can live a decade.
- They are killed in horrific ways, but you probably already know that part.
This is just a brief overview. Anecdotally, some of the worst conditions I’ve seen were on “pasture-raised” farms.
In your shoes, I would consult with a veg-friendly nutritionist to come up with an individualized diet plan that will be sustainable for you, meet your health needs, and align with your ethics.
I find pieces like this frustrating because I don’t think EA ever “used to be” one thing. Ten people who previously felt more at home in EA than they currently do will describe ten different things EA “used to be” that it no longer is, often in direct conflict with the other nine’s narratives. I’d much prefer people to say, “Here’s a pattern I’m noticing, I think it is likely bad for these reasons, and I think it wasn’t the case x years ago. I would like to see x treated as a norm.”
Applications Open: EA Career Weekend (December 16 & 17, NYC)
Thank you for the update and all of the work you’re putting into these events. I know you’re likely busy with EAG Boston, but a few questions when you have the time:
1. Is the decision to run an east coast EAG in 2024 primarily about cost? And if an east coast EAG does happen in 2024, will it definitely be in Boston vs. DC or a cheaper city?
2. If you had 2x or 3x the budget for EAGs, do you think you would organize a cause-neutral EAG in the Bay Area in addition to a GCR conference? How would more funding affect cause-specific vs. big-tent event planning?
3. Do you envision content focused on digital sentience and s-risks at the GCR conference? I’m personally worried that AI risk and biorisk are reducing the airtime for other risks (nuclear war, volcanoes, etc.), including suffering risks. Likewise, I’d still love to see GCR-oriented content focused on topics like how climate change might accelerate certain GCRs, the effects of GCRs on the global poor, the effects of GCRs on nonhuman animals, etc.
(Also, I hope all EAG events remain fully vegan, regardless of the cause area content!)
Commenting just to encourage you to make this its own post. I haven’t seen a (recent) standalone post about this topic, it seems important, and though I imagine many people are following this comment section, it also seems easy for this discussion to get lost and for people with relevant opinions to miss it or not engage because it’s off-topic.