AI safety + community health
Agreed — really lovely to see.
That seems reasonable, and I appreciate folks’ feedback via agree and disagree votes.
At this point, it’s been more than 24 hours, and CEA’s leadership team still hasn’t responded (on the EA Forum, which they run!).
I’d like to explore the idea that the CEA leader(s) involved in mishandling this case should step down. The gap between the organization’s stated goals and the choices made here is wide enough to strain the imagination. I’d like someone who has not made these catastrophic judgment errors to have the opportunity to steward community resources and growth.
It’s possible that I am overreacting, but I’m not confident that’s the case. Frances, again, thank you for your courage. Hope that you are safe and well.
I’m a little concerned by the lack of response from org leaders (unless I missed something), and I think there’s a risk that CEA leaders and others might under-update from this.
Taking a role at CEA, then angling for growth and greater stewardship/control of the brand, is a bet that you would be a better force multiplier for the movement than the next candidate. We’re now expected to believe that a leader could fail the test described here and still somehow out-strategize or out-work the next best person.
Kudos to Frances for her moral courage in the face of significant obstacles. And kudos to all the org leaders with far less experience and fewer resources who are sweating out the development of their culture, HR processes, and accountability systems. It feels like invisible work, but I see it, and my advisees see it.
With this upgrade, I feel significantly more comfortable referring the site to professionals getting started in AI safety. Great work!
When hiring teams delay decisions for weeks or reject without feedback, they aren’t just reducing their chances of hiring their #1 – they’re increasing the likelihood that their #5, #6, or #7 gives up on working in the space at all.
I’ve advised hundreds of jobseekers trying to enter similar roles, and can vouch for this author as being both highly capable and, unfortunately, dead on. This is a collective cost paid by both the candidates and the ecosystem, which 1) loses bright people and 2) takes a broader reputational hit.
Under very difficult constraints, which many hiring teams face, delaying or withholding feedback may still be the right call. But it’s costly, and I’m sorry @AnotherEAJobSeeker that you’ve been put in this situation multiple times.
JD from Christians for Impact has recently been posting about the downside risks of unsuccessful pivots, which reminded me of this post. Thank you for taking the effort to write this up; I’ve shared it with advisors in my network.
Voted! Good luck, guys!
Initially read this as “remember, there are six or more birds,” which I’ll never forget. A+.
A BOTEC of the base rates of moderate-to-severe narcissistic traits (i.e., clinically significant but not necessarily diagnosed) in founders, and their estimated costs to the ecosystem. My initial research suggests unusually high concentrations in AI safety relative to other cause areas and the general population.
Just commenting to say I’m independently aware of this lawsuit and find it really promising, and would recommend funders take a closer look and see if it’s a good opportunity for them.
Would love to see more topical reading lists on the Forum.
Proud to be among that 37%. Keep up the excellent work!
Excellent article — and even better title.
Inspiring. Thank you so much for sharing, and for your great gift!
Great job, Rocky and signatories. Statements are not programs, but neither are they nothing. They take a ton of courage and hard work to write. Proud of everyone who engaged in good faith to put this forward and to strengthen EA as a community.
I began seeking counseling and mental health care when my timelines collapsed (shortened by ~20 years over the course of a few months). It is like receiving a terminal diagnosis, complete with uncertainty and the relative isolation of suffering. Antidepressants helped. I am still saving for retirement but spending more freely on quality of life than I have before. I’m also throwing more parties with loved ones, and donating exclusively to x-risk reduction in addition to pivoting my career to AI.
It could be that I love this because it’s what I’m working on (raising safety awareness in corporate governance), but what a great post. Well structured, with a great summary at the end.
The substance of my concern is how the issue was handled, not the communication delay — although I do think there are 80/20 ways to respond quickly without undue legal risk.