AI safety + community health
pete
Organizational purpose consultant here. You would not believe the human potential left on the table by orgs that don’t tap into our deeper, non-rational / personal motivations.
Which podcast episode covers EA? That’s cool!
Excited to see the result—think this is a great contribution to the community. I suspect that a lot of us are heading into longtermist work (ex: xrisk reduction) without proper psychological footing and that more support in the movement would go a long way towards reducing burnout...good luck, Bary!
Fantastic essay—one of the most original and challenging I’ve seen on the forum.
I’m interested in this argument not so much as it relates to the value of working on AI alignment, but rather the internal narratives / structures of meaning people carry into their EA work. Many people come to EA from intense or high-control religious backgrounds—evangelicalism, Mormonism, orthodox and ultra-orthodox Judaism, and more. Especially in the kinds of educated circles that tend to overlap with EA, there’s a huge cultural vacuum for shared structures of meaning. I suspect we underestimate the power of this vacuum at our own peril. We’ve got to acknowledge that AI apocalypse narratives (focusing on the narrative here, not the issue itself) have a powerful religious pull. They offer a salvation / destruction binary that promises us what we want most in the world—relief from uncertainty.
I see a young movement with a ton of fervor / new converts and I wonder—are we honest with ourselves about what we’re looking for? Are we being smart about where we get our sense of belonging, meaning, and purpose?
Lots of folks are worried about burnout, and I am too. I see a bunch of brilliant 23-year-olds in STEM (and others!) who haven’t had a chance to develop an understanding of their emotional / relational / spiritual needs and are heading for a crash.
My first question after reading What We Owe The Future’s population ethics chapter is “why wouldn’t a weighted system solve the repugnant conclusion AND the sadistic conclusion?” It seemed like such an intuitive solution. Thank you for writing it up in greater detail and clarity!
There are fringe movements (ex: Quiverfull) that focus on procreation as a way of living out God’s will, but very few. What resonates with Christians is a “stewardship” mindset—using our God-given abilities and opportunities wisely. The Bible is full of stories of an otherwise-unspecial person being in the right place at the right time to make a historically impactful decision.
This reminds me of Peter McIntyre’s work on Non-Trivial, which is a series of interactive lessons that communicate EA principles clearly to young / beginner audiences. You guys should connect.
Thank you for sharing! One thing that could help this feel more readable is to use full words more often in place of acronyms. I was on an EA retreat recently where one person was the designated “acronym police” and would pipe up anytime things got too jargony. Even for people who know the acronyms, challenging ourselves to use them less frequently can make text more easily understood.
Really enjoyed this. Thank you for writing—clearly a labor of love. A personal question that EA is not always equipped to answer is “what do we give our families and communities?” How do we do the most good we can while being deeply rooted, whole people?
One side note: I think the title of this post didn’t cue me in to what it would be about; perhaps a more direct title would draw more interest?
The Future Forum had a much worse version of this, e.g., shifting language in its web pitch, missing its self-set application review deadlines by 4–6 weeks, and then rejecting an unknown (but suspected large) percentage of otherwise impressive applicants in relevant fields. I mention this here because, with the Future Forum falling so close in date to EAG SF, the uncertainty around FF acceptance led me to delay travel arrangements, cancel pre-EAG meetings, and nearly cancel the trip to SF entirely.
Another impact was that two high-achieving colleagues on the cusp of joining EA came to believe that FF used its nomination form like a multi-level marketing ploy to “tell us who you know” and had no intention of sincerely evaluating most applications. I don’t share this view but wanted to share a case study of how tone shifts / missed comms deadlines during event applications can lead to the worst possible thing being assumed.
I’d describe myself as a person that could probably work in policyish things and have been put off by the stereotype of DC—this post nudged me to reconsider. Thank you for sharing!
Agreed (with Zach). I found it to be much milder than described, and not surprising (Elon’s tweet, for example, wasn’t going to go unnoticed by the press). The author makes similar statements to what I’m hearing from global health / suffering-focused EA friends. It’s a fair take and a natural part of the discourse as EA gets more attention—unlike the Wall Street Journal opinion piece, which was unhinged garbage. We have to take a long-term view of the public discourse surrounding EA—a thoughtful response could be valuable, but I’m not feeling the same level of urgency compared to other things (ex: reducing future reputation risks).
Very interested in this topic, thank you for posting!
There’s a hunger in EA for personal stories—what life is like outside of forum posts for people doing the work, getting grants, being human. Thank you for sharing.
(Note: personal feelings below, very proud of / keen to support your work)
I’m struck by how differently I felt reading about this funding example, given my circumstances. I work in the private sector with job stability and hope to build a family. The thought of existing on 6-month grants and frequently changing locations is scary to me: health insurance (US), planning a financial future, kids, etc. I’ve spoken to many EAs who are in a far more transient living situation than I could handle. I suspect that’s true for many, but not all, mid-career folks.
Very cool. Thank you, Felix!
Strong upvote. Great post.
I like this take. It seems like this fits into a broader discussion of how rigorously we should try to line up actions with principles, ex: going vegan, not flying for climate reasons, or more extreme things like going zero waste.
Fantastic post—look forward to sharing it with others in the future!
One note: is it possible to update the designers’ summary? “Making things look pretty” may not communicate the value of their work, which is often highly nuanced and strategic.
“This notion of effective altruism doesn’t demand that you use all your resources to help others. It doesn’t even say that you should use your other-focused budget of resources to help others as much as possible. Instead, it merely describes an intellectual project (clause i) and a practical project (clause ii) that some people are excited about but most people aren’t.”
Considering EA as a project is such a beautiful way to reduce scope creep and the stress that comes with it. It allows us to clearly say “this resource in my life will / will not be applied to EA.” Thank you for sharing, Aryeh!