AI safety + community health
pete
When hiring teams delay decisions for weeks or reject without feedback, they aren’t just reducing their chances of landing their #1 pick; they’re increasing the likelihood that their #5, #6, or #7 gives up on working in the space at all.
I’ve advised hundreds of jobseekers trying to enter similar roles, and can vouch for this author as being both highly capable and, unfortunately, dead on. This is a collective cost paid by both the candidates and the ecosystem, which 1) loses bright people and 2) takes a reputational hit.
Many hiring teams operate under very difficult constraints, and under those constraints delaying or withholding feedback may still be the right call. But it’s costly, and I’m sorry, @AnotherEAJobSeeker, that you’ve been put in this situation multiple times.
JD from Christians for Impact has recently been posting about the downside risks of unsuccessful pivots, which reminded me of this post. Thank you for taking the time to write this up; I’ve shared it with advisors in my network.
Voted! Good luck, guys!
Initially read this as “remember, there are six or more birds,” which I’ll never forget. A+.
A BOTEC of the base rates of moderate-to-severe narcissistic traits (i.e., clinical but not necessarily diagnosed) in founders, and their estimated costs to the ecosystem. My initial research suggests unusually high concentrations in AI safety relative to other cause areas and the general population.
Just commenting to say I’m independently aware of this lawsuit and find it really promising; I’d recommend funders take a closer look to see whether it’s a good opportunity for them.
Would love to see more topical reading lists on the Forum.
Proud to be among that 37%. Keep up the excellent work!
Excellent article — and even better title.
Inspiring. Thank you so much for sharing, and for your great gift!
Great job, Rocky and signatories. Statements are not programs, but neither are they nothing. They take a ton of courage and hard work to write. Proud of everyone who engaged in good faith to put this forward and to strengthen EA as a community.
I began seeking counseling and mental health care when my timelines collapsed (shortened by ~20 years over the course of a few months). It is like receiving a terminal diagnosis, complete with the uncertainty and the relative isolation of suffering. Antidepressants helped. I am still saving for retirement but spending more freely on quality of life than I did before. I’m also throwing more parties with loved ones, donating exclusively to x-risk reduction, and pivoting my career to AI.
It could be that I love this because it’s what I’m working on (raising safety awareness in corporate governance), but what a great post. Well structured, with a great summary at the end.
Unrelated to the broader issue of EA’s lack of demographic diversity: there are several groups for various religions in EA (and for other demographic groups/coalitions, like parents). Not sure where to find a centralized list off the top of my head.
Beautiful writing (which I really appreciate, and think we should be more explicit about promoting). I see that AI risk isn’t mentioned here, and I’m curious how it factors into your general sense of a promising future.
We are EAs because we share the experience of being bothered—by suffering, by pandemic risk, by our communities’ failure to prioritize what matters. Before EA, many of us were alone in these frustrations and lacked the support and resources to pursue our dreams of helping. I remember what it was like before EA and I’m never going back. Thank you, each of you, for bringing something beautiful into the world.
Absolutely, profoundly net negative. By EV, I mean “harm.”
I think this comment helped this post gain attention and made me more likely to engage. Thank you, Markus, for encouraging us to pay attention.
With this upgrade, I feel significantly more comfortable referring the site to professionals getting started in AI safety. Great work!