AI safety + community health
pete
The Future Forum had a much worse version of this, e.g.: shifting language in its web pitch, missing its self-set application review deadlines by 4-6 weeks, and then denying an unknown (but, I suspect, large) percentage of otherwise impressive applicants in relevant fields. I mention this here because, with Future Forum falling so close in date to EAG SF, the uncertainty around FF acceptance led me to delay travel arrangements, cancel pre-EAG meetings, and nearly cancel the trip to SF entirely.
Another impact was that two high-achieving colleagues on the cusp of joining EA came to believe that FF used its nomination form like a multi-level marketing ploy to “tell us who you know” and had no intention of sincerely evaluating most applications. I don’t share this view, but wanted to offer a case study of how tone shifts and missed comms deadlines during event applications can lead people to assume the worst.
It’s also important to note that with most EAs being nonreligious (86% at the last survey, I think), it can be relatively easy for EA to play that role in people’s lives. The cultural template is there, and it’s powerful.
I like the sentiment but disagree. We have to know how far this goes. This is a system failure, not just an individual failure.
You make good points here. EA can’t scale without hard conversations about resources—and we also can’t scale if every financial decision is also a cause prioritization decision AND an assessment of our own expected future contribution. That’s impossible and nuts.
Let’s borrow some good heuristics about money from the private sector (transportation rules of thumb, e.g. coach for flights under 5 hours, business class for 5+ hours, or literally anything like that). Get smart finance people, have them set policies, and then let the rest of us stay accountable but otherwise not think about it.
Replying to this because I don’t think this is a rare view, and I’m concerned about it. I met someone this week who seemed to openly view cults as a template (flawed, but useful) and was in the process of building a large compound overseas where he could disseminate his beliefs to followers living onsite. By his own admission, he was using EA as a platform to launch multiple(?) new, genuine religions.
In light of the Leverage Research incident, we should expect and keep an eye out for folks using the EA umbrella to actually start cults.
We are EAs because we share the experience of being bothered—by suffering, by pandemic risk, by our communities’ failure to prioritize what matters. Before EA, many of us were alone in these frustrations and lacked the support and resources to pursue our dreams of helping. I remember what it was like before EA and I’m never going back. Thank you, each of you, for bringing something beautiful into the world.
Absolutely, profoundly net negative. By EV, I mean “harm.”
Thank you for sharing! One thing that could help this feel more readable is to use full words more often in place of acronyms. I was on an EA retreat recently where one person was the designated “acronym police” and would pipe up anytime things got too jargony. Even for people who know the acronyms, challenging ourselves to use them less frequently can make text more easily understood.
Techno-pessimism is trendy, snark is trendy, climate doomerism is trendy. There’s also a time-honored tradition of using newspaper comment sections specifically for complaining, one that predates digital papers (see: letters to the editor). I’d be hard-pressed to find anything that moves far off that base rate.
That said, I expected 60/40 positive to negative. This is a super helpful way to see sentiment at a glance, and I’d be so excited to see a compilation like this extended across other articles (e.g., the negative Salon piece) and maintained over time.
It could be that I love this because it’s what I’m working on (raising safety awareness in corporate governance), but what a great post. Well structured, with a great summary at the end.
Agreed (with Zach). I found it to be much milder than described, and not surprising (Elon’s tweet, for example, wasn’t going to go unnoticed by the press). The author makes statements similar to what I’m hearing from global health and suffering-focused EA friends. It’s a fair take and a natural part of the discourse as EA gets more attention, unlike the Wall Street Journal opinion piece, which was unhinged garbage. We have to take a long-term view of the public discourse surrounding EA; a thoughtful response could be valuable, but I don’t feel the same level of urgency here as with other things (e.g., reducing future reputational risks).
Beautiful writing (which I really appreciate, and think we should be more explicit about promoting). I see that AI risk isn’t mentioned here and am curious how that factors into your general sense of the promising future.
I cleared my calendar yesterday just to grieve. I relived losing family to the pandemic, thinking of all the resources lost for pandemic prevention.
It’s true that there’s work to be done, but grief is heavy, and if we don’t deal with it honestly we’ll pay for it later.
Love seeing this type of thinking on the Forum. Thanks for writing.
Phenomenal post. Nice categorization, super clear and compelling.
Great post. EA is a mixed blessing for the many folks who tend toward anxiety. There’s always something more to be thinking about, always a career plan to sharpen, always more to donate. Guideposts like this one can help folks manage the firehose and make EA healthier & more sustainable than other social impact communities. I’m thinking of 2017ish Climate Twitter, which kept saying something like “to exist as a person is a crime against the planet.”
One question: “a person’s right to be alive is not tied to their past or future altruistic impact.” I agree with that statement but haven’t heard much philosophical justification for it. Do you know where I could learn more about the idea of intrinsic worth and how it relates to consequentialist morality?
Sent this to a friend building a career plan immediately. Fantastic post.
Update: Friend said “Wow, awesome article. That really was comforting.”
Excellent initiative. This increases my confidence that we can create organizations that truly scale and achieve significant impact.
Strong upvote—I found your perspective really fresh:
“The most likely case to me is that if AI x-risk is solved or turns out not to be a serious issue, and we just keep facing x-risks in proportion to how strong our technology gets, forever. Eventually we draw a black ball and all die.”

Lots of us are considering a career pivot into AI safety. Is it... actually tractable at all? How hopeful should we be about it? No idea.
There’s a hunger in EA for personal stories—what life is like outside of forum posts for people doing the work, getting grants, being human. Thank you for sharing.
(Note: personal feelings below, very proud of / keen to support your work)
I’m struck by how differently I felt reading about this funding example, coming from my own circumstances. I work in the private sector with job stability and hope to build a family. The thought of existing on 6-month grants and frequently changing locations is scary to me: health insurance (in the US), planning a financial future, kids, etc. I’ve spoken to many EAs who live in a far more transient situation than I could handle. I suspect that’s true for many, but not all, mid-career folks.