Thanks for sharing this update. I appreciate the transparency and your engagement with the broader community!
I have a few questions about this strategic pivot:
On organizational structure: Did you consider alternative models that would preserve 80,000 Hours’ established reputation as a more “neutral” career advisor while pursuing this AI-focused direction? For example, creating a separate brand or group dedicated to AI careers while maintaining the broader 80K platform for other cause areas? This might help avoid potential confusion where users encounter both your legacy content presenting multiple cause areas and your new AI-centric approach.
On the EA pathway: I’m curious about how this shift might affect the “EA funnel”—where people typically enter effective altruism through more intuitive cause areas like global health or animal welfare before gradually engaging with longtermist ideas like AI safety. By positioning 80,000 Hours primarily as an AI-focused organization, are you concerned this might make it harder for newcomers to find their way into the community if AI risk arguments initially seem abstract or speculative to them?
On reputational considerations: Have you weighed the potential reputational risks if AI development follows a more moderate trajectory than anticipated? If we see AI plateau at impressive but clearly non-transformative capabilities, this strategic all-in approach could affect 80,000 Hours’ credibility for years to come. The past decade of 80K’s work as a cause-diverse advisor has created tremendous value—might a spinoff organization for AI-specific work better preserve that accumulated trust while still allowing you to pursue what you see as the highest-impact path?
I feel like this argument has been implicitly holding back a lot of EA focus on AI (for better or worse), so thanks for putting it so clearly. I always wonder about the asymmetry of it: what about the reputational benefits that accrue to 80K/EA for correctly calling the biggest cause ever? (If they’re correct)
Hi Håkon, Arden from 80k here.
Great questions.
On org structure:
One question for us is whether we want to create a separate website (“10,000 Hours?”) that we cross-promote from the 80k website, or to substantially change the 80k website to front the new AI content. That’s something we’re still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we’re not currently thinking about making an entire new organisation.
Why not?
For one thing, it’d be a lot of work and time, and we feel this shift is urgent.
Primarily, though, 80,000 Hours is a cause-impartial organisation, and we think that means prioritising the issues we think are most pressing (& telling our audience why we think that).
What would be the reason for keeping one 80k site instead of making a second, separate one?
As I wrote to Zach above, I think the site currently doesn’t do a good job of representing the possibility of short timelines or the variety of risks AI poses, even though it claims to be telling people the key information they need to have a high-impact career. I think this is key information, so I want it to be included very prominently.
As a commenter noted below, it’d take time and work to build up an audience for the new site.
But I’m not sure! As you say, there are reasons to make a separate site as well.
On EA pathways: I think Chana covered this well – it’s possible this will shrink the number of people getting into EA ways of thinking, but it’s not obvious. AI risk doesn’t feel so abstract anymore.
On reputation: this is a worry. We do plan to express uncertainty about whether AGI will indeed progress as quickly as we worry it will, and to be clear that if people pursue a route to impact that depends on fast AI timelines, they’re making a bet that might not pay off. However, we think it’s important both for us & for our audience to act under uncertainty, using rules of thumb but also thinking about expected impact.
In other words – yes, our reputation might suffer if AI progresses slowly. If that happens, it will probably be worse for our impact but better for the world, and I think I’ll still feel good about having expressed our (uncertain) views at the time we held them.
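(To make the “acting under uncertainty, thinking about expected impact” point concrete, here’s a minimal sketch in Python. Every number in it is hypothetical – my own made-up illustration of the kind of bet being described, not anything 80k has actually estimated.)

```python
# Toy expected-impact comparison under uncertainty about AI timelines.
# All probabilities and impact scores below are hypothetical, for
# illustration only -- not estimates 80k has endorsed.

p_short = 0.4  # assumed probability that transformative AI arrives soon

# Illustrative impact of each strategy in each possible world.
impact = {
    ("ai_pivot",   "short"): 100,  # pivot pays off hugely if timelines are short
    ("ai_pivot",   "long"):  -10,  # reputational/opportunity cost if they aren't
    ("status_quo", "short"):  20,  # diversified advice still helps somewhat
    ("status_quo", "long"):   30,  # steady value across many cause areas
}

def expected_impact(strategy: str) -> float:
    """Probability-weighted impact of a strategy across the two scenarios."""
    return (p_short * impact[(strategy, "short")]
            + (1 - p_short) * impact[(strategy, "long")])

for strategy in ("ai_pivot", "status_quo"):
    print(f"{strategy}: expected impact = {expected_impact(strategy):.1f}")
```

Under these made-up numbers the pivot comes out ahead (34 vs 26) even though it loses in the slow-timelines world – which is exactly the sense in which it’s a bet that might not pay off but can still be worth making in expectation.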
I think others at 80k are best placed to answer this (for time zone reasons I’m most active in this thread right now), but for what it’s worth, I’m worried about the loss at the top of the EA funnel! I think it’s worth it overall, but I think this is definitely a hit.
That said, I’m not sure AI risk has to be abstract or speculative! AI is everywhere – it feels very real to some people (realer to some than to others), and the problems we’re encountering are rapidly becoming less speculative (we have papers showing at least some amount of alignment faking, scheming, obfuscation of chain of thought, reward hacking, all that stuff!)
One question I have is how much, in the future, people looking for a general “doing good” framework will in fact bounce off the new 80k. For instance, it could be that AI becomes so ubiquitous that it would feel totally out of touch not to be discussing it a lot. More compellingly to me, I think it’s 80k’s job to make the connection: doing good in the current world requires taking AI and its capabilities and risks seriously. We are in an age of AI, and that has implications for all possible routes to doing good.
I like your take on the reputational considerations; I think lots of us will definitely have to eat non-zero crow if things really plateau. But I think the evidence is strong enough to care deeply about this and prioritize it, and I don’t want to obscure the fact that we believe that just for the reputational benefit.
From a practical point of view, if all the traffic and search/other reputation accrues to the 80k website, and timelines are perceived to be short, I can imagine it making sense for the team to directly adjust the focus of the website rather than take years to build up a separate, additional brand.
Makes sense. Just want to flag that tensions like these emerge because 80K is simultaneously a core part of the movement and an independent organization with its own goals and priorities.