To the extent that this post helps me understand what 80,000 Hours will look like in six months or a year, I feel pretty convinced that the new direction is valuable—and I’m even excited about it. But I’m also deeply saddened that 80,000 Hours as I understood it five years ago—or even just yesterday—will no longer exist. I believe that organization should exist and be well-resourced, too.
As others have noted, I would have much preferred to see this AGI-focused iteration launched as a spinout or sister organization, while preserving even a lean version of the original, big-tent strategy under the 80K banner, and not just through old content remaining online. A multi-cause career advising platform with thirteen years of refinement, SEO authority, community trust, and brand recognition is not something the EA ecosystem can easily replicate. Its exit from the meta EA space leaves a huge gap that newer and smaller projects simply can’t fill in the short term.
I worry that this shift weakens the broader ecosystem, making it harder for promising people to find their path into non-AI cause areas—some of which may be essential to navigating a post-AGI world. Even through an AGI-focused lens, it’s not obvious that deprioritizing other critical problems is a winning long-term bet.
If transformative AI is just five years away, then we need people who have spent their careers reducing nuclear risks to be doing their most effective work right now—even if they’re not fully bought into AGI timelines. We need biosecurity experts building robust systems to mitigate accidental or deliberate pandemics—whether or not they view that work as directly linked to AI. And if we are truly on the brink of catastrophe, we still need people focused on minimizing human and nonhuman suffering in the time we have left. That’s what made 80K so special: it could meet people where they were, offer intellectually honest cause prioritization, and help them find a high-impact path even if they weren’t ready to commit to one specific worldview.
I have no doubt the 80K team approached this change with thoughtfulness and passion for doing the most good. But I hope they’ll consider preserving 80K as 80K—a broadly accessible, big-tent hub—and launching this new AGI-centered initiative under a distinct name. That way, we could get the best of both worlds: a strong, focused push on helping people work on safely navigating the transition to a world with AGI, without losing one of the EA community’s most trusted entry points.
Hey Rocky —

Thanks for sharing these concerns. These are really hard decisions we face, and I think you’re pointing to some really tricky trade-offs.
We’ve definitely grappled with the question of whether it would make sense to spin up a separate website that focused more on AI. It’s possible that could still be a direction we take at some point.
But the key decision we’re facing is what to do with our existing resources — our staff time, the website we’ve built up, our other programmes and connections. And we’ve been struggling with the fact that the website doesn’t fully reflect the urgency we believe is warranted around rapidly advancing AI. Whether we launch another site or not, we want to communicate honestly about how we’re thinking about the top problem in the world and how it will affect people’s careers. To do that, we need to make a lot of updates in the direction this post is discussing.
That said, I’ve always really valued the fact that 80k can be useful to people who don’t agree with all our views. If you’re sceptical about AI having a big impact in the next few decades, our content on pandemics, nuclear weapons, factory farming — or our general career advice — can still be really useful. I think that will remain true even with our strategy shift.
I also think this is a really important point:
“If transformative AI is just five years away, then we need people who have spent their careers reducing nuclear risks to be doing their most effective work right now—even if they’re not fully bought into AGI timelines. We need biosecurity experts building robust systems to mitigate accidental or deliberate pandemics—whether or not they view that work as directly linked to AI.”
I think we’re mostly in agreement here — work on nuclear risks and biorisks remains really important, and last year we made efforts to make sure our bio and nuclear content was more up to date. We recently made an update about mirror bio risks, because they seem especially pressing.
As the post above says: “When deciding what to work on, we’re asking ourselves ‘How much does this work help make AI go better?’, rather than ‘How AI-related is it?’” So to the extent that other work has a key role to play in the risks that surround a world with rapidly advancing AI, it’s clearly within the scope of the new strategy.
But I think it probably is helpful for people doing work in areas like nuclear safety and bio to recognise the way short AI timelines could affect their work. So if 80k can communicate that to our audience more clearly, and help people figure out what that means they should do for their careers, it could be really valuable.
“And if we are truly on the brink of catastrophe, we still need people focused on minimizing human and nonhuman suffering in the time we have left.”
I do think we should be absolutely clear that we agree with this — it’s incredibly valuable that work to minimise existing suffering continues. I support that happening and am incredibly thankful to those who do it. This strategy doesn’t change that one bit. It just means 80k thinks our next marginal efforts are best focused on the risks arising from AI.
On the broader issue of what this means for the rest of the EA ecosystem, I think the risks you describe are real and are important to weigh. One reason we wanted to communicate this strategy publicly is so others could assess it for themselves and better coordinate on their paths forward. And as Conor said, we really wish we didn’t have to live in a world where these issues seem as urgent as they do.
But I think I see the costs of the shift as less stark. We still plan to have our career guide up as a central piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t see this as being as clear a break from the past as you might.
At the highest level, though, we do face a decision about whether to focus more on AI and the plausibly short timelines to AGI, or to spend time on a wider range of problem areas and take less of a stance on timelines. Focusing more does have the risk that we won’t reach our traditional audience as effectively, which might even reduce our impact on AI; but declining to focus more has the risk of missing out on other audiences we previously haven’t reached, failing to faithfully communicate our views about the world, and passing up big opportunities to positively work on what we think is the most pressing problem we face.
As the post notes, while we are committed to making the strategic shift, we’re open to changing our minds if we get important updates about our work. We’ll assess how we’re performing on the new strategy, whether there are any unexpected downsides, and whether developments in the world are matching our expectations. And we definitely continue to be open to feedback from you and others who have a different perspective on the effects 80k is having in the world, and we welcome input about what we can do better.
Minor point, but I’ve seen “big tent EA” as referring to applying effectiveness techniques to any charity. Then maybe broad current EA causes could be called the middle-sized tent. Then just GCR/longtermism could be called the small tent (which 80k already largely pivoted to years ago, at least considering their impact multipliers). Then just AI could be the very small tent.
(Tangent: “big tent EA” originally referred to encouraging a broad set of views among EAs while ensuring EA is presented as a question, but semantic drift I suppose...)
I was referring to this earlier academic article. I’ve also heard of discussion in a similar vein from the early days of EA.
Thanks! I wasn’t sure of the best terminology to use, because I would never have described 80K as “cause agnostic” or “cause impartial”, and “big tent” or “multi-cause” felt like the closest gesture to what they’ve been.