Thanks for sharing this update. I appreciate the transparency and your engagement with the broader community!
I have a few questions about this strategic pivot:
On organizational structure: Did you consider alternative models that would preserve 80,000 Hours’ established reputation as a more “neutral” career advisor while pursuing this AI-focused direction? For example, creating a separate brand or group dedicated to AI careers while maintaining the broader 80K platform for other cause areas? This might help avoid the potential confusion of users encountering both your legacy content, which presents multiple cause areas, and your new AI-centric approach.
On the EA pathway: I’m curious about how this shift might affect the “EA funnel”—where people typically enter effective altruism through more intuitive cause areas like global health or animal welfare before gradually engaging with longtermist ideas like AI safety. By positioning 80,000 Hours primarily as an AI-focused organization, are you concerned this might make it harder for newcomers to find their way into the community if AI risk arguments initially seem abstract or speculative to them?
On reputational considerations: Have you weighed the potential reputational risks if AI development follows a more moderate trajectory than anticipated? If we see AI plateau at impressive but clearly non-transformative capabilities, this strategic all-in approach could affect 80,000 Hours’ credibility for years to come. The past decade of 80K’s work as a cause-diverse advisor has created tremendous value—might a spinoff organization for AI-specific work better preserve that accumulated trust while still allowing you to pursue what you see as the highest-impact path?
To be clear, I’m not an expert on effective altruism, and I don’t identify with that terminology; my impression is that it’s a somewhat outdated term.