Rockwell
To the extent that this post helps me understand what 80,000 Hours will look like in six months or a year, I feel pretty convinced that the new direction is valuable—and I’m even excited about it. But I’m also deeply saddened that 80,000 Hours as I understood it five years ago—or even just yesterday—will no longer exist. I believe that organization should exist and be well-resourced, too.
As others have noted, I would have much preferred to see this AGI-focused iteration launched as a spinout or sister organization, while preserving even a lean version of the original, big-tent strategy under the 80K banner, and not just through old content remaining online. A multi-cause career advising platform with thirteen years of refinement, SEO authority, community trust, and brand recognition is not something the EA ecosystem can easily replicate. Its exit from the meta EA space leaves a huge gap that newer and smaller projects simply can’t fill in the short term.
I worry that this shift weakens the broader ecosystem, making it harder for promising people to find their path into non-AI cause areas—some of which may be essential to navigating a post-AGI world. Even from within an AGI-focused lens, it’s not obvious that deprioritizing other critical problems is a winning long-term bet.
If transformative AI is just five years away, then we need people who have spent their careers reducing nuclear risks to be doing their most effective work right now—even if they’re not fully bought into AGI timelines. We need biosecurity experts building robust systems to mitigate accidental or deliberate pandemics—whether or not they view that work as directly linked to AI. And if we are truly on the brink of catastrophe, we still need people focused on minimizing human and nonhuman suffering in the time we have left. That’s what made 80K so special: it could meet people where they were, offer intellectually honest cause prioritization, and help them find a high-impact path even if they weren’t ready to commit to one specific worldview.

I have no doubt the 80K team approached this change with thoughtfulness and passion for doing the most good. But I hope they’ll reconsider: preserve 80K as 80K—a broadly accessible, big-tent hub—and launch this new AGI-centered initiative under a distinct name. That way, we could get the best of both worlds: a strong, focused push on helping people work on safely navigating the transition to a world with AGI, without losing one of the EA community’s most trusted entry points.
Exciting! Am I right in understanding that Forethought Foundation for Global Priorities Research is no longer operational?
Room Full of Strangers
That makes sense! My best guess is that this is an evolving situation many in the community are paying attention to, but that those more in the weeds are part of larger, non-EA-specific discussion channels, given the scope of the entities involved and the larger global response. But I could be off the mark here. I base this largely on my own experience of following this closely but not having anything in particular to say about it on e.g. the Forum.
I disagree with the implication that those focused on other cause areas would actively downvote a post, rather than just not engage. I haven’t seen evidence of people downvoting posts for focusing on other cause areas and I worry it spreads undue animosity to imply otherwise.
I won’t claim it is sufficient given the urgency of the current funding cuts, but there have been many posts, quick takes, and comments about this issue in the past few weeks, including one from four days ago announcing The Rapid Response Fund, which has 90 upvotes at the time of writing.
My primary advice is to avoid rushing to any judgements. The criticism came out yesterday and neither organization was aware of it in advance. I assume Sinergia and/or ACE will respond, but it makes sense that that might take at least several days.
Thanks for the post! Quick flag for EAIF and EA Funds in general (@calebp?) that I would find it helpful to have the team page of the website up to date, and possibly for those who are comfortable sharing contact information, as Jamie did here, to have it listed in one place.
I actively follow EA Funds content and have been confused many times over the years about who is involved in what capacity and how those who are comfortable with it can be contacted.
“There seems to be movement towards animal welfare interventions and away from global health interventions.”
What is this based on? I don’t believe this tracks with e.g. distribution of EA-associated donations.
The application deadline has been extended and now closes on July 28 at 11:59 pm ET.
Effective Altruism NYC is Hiring an Executive Director
My best guess is that there is also a large U.S./EU difference here.
I do think you need to differentiate the Bay Area from the rest of the US, or at least from the US East Coast.
This seems to have significant implications for the trajectory of global geopolitical stability and some GCR scenarios. I’m wondering whether others following this who are better informed than I am see it as a notable update.
[Link Post] Russia and North Korea sign partnership deal that appears to be the strongest since the Cold War
Oh, it sounds like you might be confused about the context I’m talking about this occurring in, and I’m not sure that explaining it more fully is on-topic enough for this post. I’m going to leave this thread here for now to not detract from the main conversation. But I’ll consider making a separate post about this and welcome feedback there.
I do also want to clarify that I have no desire to “control which ideas and people [anyone] is exposed to.” It is more so, “If I am recommending 3 organizations I think someone should connect with, are there benefits or risks tied to those recommendations?”
I really appreciate you sharing your perspective on this. I think these are extremely hard calls, as evidenced by the polarity of the discussion on this post, and to some extent it feels like a lose-lose situation. I don’t think these decisions should be made in a vacuum and want other people’s input, which is one reason I’m flagging how this affects my work and the larger involvement funnels in EA.
Thanks for spelling this out.
To give some color to how this affects my work in particular (speaking strictly for myself, as I haven’t discussed this with others on my team):
One of our organizational priorities is ensuring we create a welcoming and, hopefully, safe community for people to do good better, regardless of their identities. A large part of our work is familiarizing people with and connecting them to organizations and resources, including ones that aren’t explicitly EA-branded. We are often one of their first touch points within EA and its niches, including forecasting. Many factors shape whether people decide to continue and deepen their involvement, including how positive they find these early touch points. When we’re routing people toward organizations and individuals, we know that their perception of our recommendations in turn affects their perception of us and of EA as a whole.
Good, smart, ambitious people usually have several options for professional communities to spend their time within. EA and its subcommunities are just one option and an off-putting experience can mean losing people for good.
With this in mind, I will feel much more reluctant to direct community members to Manifold in particular and (EA-adjacent) forecasting spaces more broadly, especially if the community member is from a group underrepresented in EA. I think Manifold brings a lot of value, but I can’t in good conscience recommend people plug into communities I believe most of those I am advising would find notably morally off-putting.
This is of course a subjective judgement call; I understand there are strong counterarguments here, and what repels one person may attract another. But I hope this gives a greater sense of the considerations and trade-offs I (and probably many others) will have to spend time thinking about and reaching decisions around as a result of Manifest.
Austin, I’m gathering there might be a significant and cruxy divergence in how you conceptualize Manifold’s position in and influence on the EA community and how others in the community conceptualize this. Some of the core disagreements discussed here are relevant regardless, but it might help clarify the conversation if you describe your perspective on this.
Commenting just to encourage you to make this its own post. I haven’t seen a (recent) standalone post about this topic, it seems important, and though I imagine many people are following this comment section it also seems easy for this discussion to get lost and for people with relevant opinions to miss it/not engage because it’s off-topic.
Thanks! I wasn’t sure the best terminology to use because I would never have described 80K as “cause agnostic” or “cause impartial” and “big tent” or “multi-cause” felt like the closest gesture to what they’ve been.