80,000 Hours is shifting its strategic approach to focus more on AGI
TL;DR
In a sentence:
We are shifting our strategic focus to put our proactive effort towards helping people work on safely navigating the transition to a world with AGI, while keeping our existing content up.
In more detail:
We think it’s plausible that frontier AI companies will develop AGI by 2030. Given the significant risks involved, and the fairly limited amount of work that’s been done to reduce these risks, 80,000 Hours is adopting a new strategic approach to focus our efforts in this area.
During 2025, we are prioritising:
Deepening our understanding as an organisation of how to improve the chances that the development of AI goes well
Communicating why and how people can contribute to reducing the risks
Connecting our users with impactful roles in this field
And fostering an internal culture which helps us to achieve these goals
We remain focused on impactful careers, and we plan to keep our existing written and audio content accessible to users. However, we are narrowing our focus as we think that most of the very best ways to have impact with one’s career now involve helping make the transition to a world with AGI go well.
This post goes into more detail on why we’ve updated our strategic direction, how we hope to achieve it, what we think the community implications might be, and answers some potential questions.
Why we’re updating our strategic direction
Since 2016, we’ve ranked ‘risks from artificial intelligence’ as our top pressing problem. Whilst we’ve provided research and support on how to work on reducing AI risks since that point (and before!), we’ve put in varying amounts of investment over time and between programmes.
We think we should consolidate our effort and focus because:
We think that AGI by 2030 is plausible — and this is much sooner than most of us would have predicted 5 years ago. This is far from guaranteed, but we think the view is compelling based on analysis of the current flow of inputs into AI development and the speed of recent AI progress. We don’t aim to fully defend this claim here (though we plan to publish more on this topic soon in our upcoming AGI career guide), but the idea that something like AGI will plausibly be developed in the next several years is supported by:
The aggregate forecast of predictions on Metaculus
Analysis of the constraints to AI scaling from Epoch
The views of insiders at top AI companies — see here and here for examples; see additional discussion of these views here
In-depth discussion of the arguments for and against short timelines from Convergence Analysis (written by Zershaaneh, who will be joining our team soon)
We are in a window of opportunity to influence AGI, before laws and norms are set in place.
80k has an opportunity to help more people take advantage of this window. We want our strategy to be responsive to changing events in the world, and we think that prioritising reducing risks from AI is probably the best way to achieve our high-level, cause-impartial goal of doing the most good for others over the long term by helping people have high-impact careers. We expect the landscape to move faster in the coming years, so we’ll need a faster moving culture to keep up.
While many staff at 80k already regarded reducing risks from AI as our most important priority before this strategic update, our new strategic direction will help us coordinate efforts across the org, prioritise between different opportunities, and put in renewed effort to determine how we can best support our users in helping to make AGI go well.
How we hope to achieve it
At a high level, we are aiming to:
Communicate more about the risks of advanced AI and how to mitigate them
Identify key gaps in the AI space where more impactful work is needed
Connect our users with key opportunities to positively contribute to this important work
To keep us accountable to our high level aims, we’ve made a more concrete plan. It’s centred around the following four goals:
Develop deeper views about the biggest risks of advanced AI and how to mitigate them
By increasing the capacity we put into learning and thinking about transformative AI, its evolving risks, and how to help make it go well.
Communicate why and how people can help
Develop and promote resources and information to help people understand the potential impacts of AI and how they can help.
Contribute positively to the ongoing discourse around AI via our podcast and video programme to help people understand key debates and dispel misconceptions.
Connect our users to impactful opportunities for mitigating the risks from advanced AI
By growing our headhunting capacity, doing active outreach to people who seem promising for relevant roles, and driving more attention to impactful roles on our job board.
Foster an internal culture which helps us to achieve these goals
By moving quickly and efficiently, increasing automation where possible, and growing capacity. In particular, increasing our content capacity is a major priority.
Community implications
We think helping the transition to AGI go well is a really big deal — so much so that we think this strategic focusing is likely the right decision for us, even through our cause-impartial lens of aiming to do the most good for others over the long term.
We know that not everyone shares our views on this. Some may disagree with our strategic shift because:
They have different expectations about AI timelines or views on how risky advanced AI might be.
For example, one of our podcast episodes last year explored the question of why people disagree so much about AI risk.
They’re more optimistic about 80,000 Hours’ historical strategy of covering many cause areas rather than this narrower strategic shift, irrespective of their views about AI.
We recognise that prioritising AI risk reduction comes with downsides and that we’re “taking a bet” here that might not end up paying off. But trying to do the most good involves making hard choices about what not to work on and making bets, and we think it is the right thing to do ex ante and in expectation — for 80k and perhaps for other orgs/individuals too.
If you are thinking about whether you should make analogous updates in your individual career or organisation, some things you might want to consider:
Whether the way you’re acting lines up with your best-guess timelines
Whether — irrespective of what cause you’re working in — it makes sense to update your strategy to shorten your impact-payoff horizons or update your theory of change to handle the possibility and implications of TAI
Applying to speak to our advisors if you’re weighing up an AI-focused career change
What impact-focused career decisions make sense for you, given your personal situation and fit
While we think that most of the very best ways to have impact with one’s career now come from helping AGI go well, we still don’t think that everyone trying to maximise the impact of their career should be working on AI.
On the other hand, 80k will now be focusing less on broader EA community building and will do little to no investigation into impactful career options in non-AI-related cause areas. This means that these areas will be more neglected, even though we still plan to keep our existing content up. We think there is space for people to create new projects here, e.g. an organisation focused on biosecurity and/or nuclear security careers advice outside of their intersections with AI. (Note that we still plan to advise on how to help biosecurity go well in a world of transformative AI, and on other intersections of AI with other areas.) We are also glad that there are existing organisations in this space, such as Animal Advocacy Careers and Probably Good, as well as orgs like CEA focusing on EA community building.
Potential questions you might have
What does this mean for non-AI cause areas?
Our existing written and audio content isn’t going to disappear. We plan for it to still be accessible to users, though written content on non-AI topics may not be featured or promoted as prominently in the future. We expect that many users will still get value from our backlog of content, depending on their priorities, skills, and career stage. Our job board will continue listing roles which don’t focus on preventing risks from AI, but will raise its bar for these roles.
But we’ll be hugely raising our bar for producing new content on topics that aren’t relevant for making the transition to AGI go well. The topics we think are relevant here are relatively diverse and expansive, including intersections where AI increases risks in other cause areas, such as biosecurity. When deciding what to work on, we’re asking ourselves “How much does this work help make AI go better?”, rather than “How AI-related is it?”
We’re doing this because we don’t currently have enough content and research capacity to cover AI safety well and want to do that as a first priority. Of course, there are a lot of judgement calls to make in this area: which podcast guests might bring in a sufficiently large audience? What skills and cause-agnostic career advice is sufficiently relevant to making AGI go well? Which updates, like our recent mirror bio updates, are above the bar to make even if they’re not directly related to AI? One decision we’ve already made is going ahead with traditionally publishing our existing career guide, since the content is nearly ready, we have a book deal, and we think that it will increase our reach as well as help people develop an impact mindset about their careers — which is helpful for our new, more narrow goals as well.
We don’t have a precise answer to all of these questions. But as a general rule, it’s probably safe to assume 80k won’t be releasing new articles on topics which don’t relate to making AGI go well for the foreseeable future.
How big a shift is this from 80k’s status quo?
At the most zoomed out level of “What does 80k do?”, this isn’t that big a change — we’re still focusing on helping people to use their careers to have an impact, we’re still taking the actions which we think will help us do the most good for sentient beings from a cause-impartial perspective, and we’re still ranking risks from AI as the top pressing problem.
But we’d like this strategic direction to cause real change at 80k — significantly shifting our priorities and organisational culture to focus more of our attention on helping AGI go well.
The extent to which that’ll cause noticeable changes to each programme’s strategy and delivery depends on the team’s existing prioritisation and how costly dividing their attention between cause areas is. For example:
Advising has already been prioritising speaking to people interested in mitigating risks from AI, whereas the podcast has been covering a variety of topics.
Continuing to add non-AGI jobs to our job board doesn’t significantly trade off with finding new AGI job postings, whereas writing non-AGI articles for our site would need to be done at the expense of writing AGI-focused articles.
Are EA values still important?
Yes!
As mentioned, we’re still using EA values (e.g. those listed here and here) to determine what to prioritise, including in making this strategic shift.
And we still think it’s important for people to use EA values and ideas as they’re thinking about and pursuing high-impact careers. Some particular examples which feel salient to us:
Scope sensitivity and thinking on the margin seem important for having an impact in any area, including helping AGI go well.
We think there are some roles / areas of work where it’s especially important to continually use EA-style ideas and be steadfastly pointed at having a positive impact in order for it to be good to work in the area. For example, in roles where it’s possible to do a large amount of accidental harm, like working at an AI company, or roles where you have a lot of influence in steering an organisation’s direction.
There are also a variety of areas where EA-style thinking about issues like moral patienthood, neglectedness, leverage, etc. is still incredibly useful – e.g. grand challenges humanity may face due to explosive progress from transformatively powerful AI.
We have also appreciated that EA’s focus on collaborativeness and truthseeking has meant that people encouraged us to interrogate whether our previous plans were in line with our beliefs about AI timelines. We also appreciate that it’ll mean that people will continue to challenge our assumptions and ideas, helping us to improve our thinking on this topic and to increase the chance we’ll learn if we’re wrong.
What would cause us to change our approach?
This is now our default strategic direction, and so we’ll have a reasonably high threshold for changing the overall approach.
We care most about having a lot of positive impact, and while this strategic plan is our current guess of how we’ll achieve that, we aim to be prepared to change our minds and plans if the evidence changes.
Concretely, we’re planning to identify the kinds of signs that would cause us to notice this strategic plan was going in the wrong direction in order to react quickly if that happens. For example, we might get new information about the likely trajectory of AI or about our ability to have an impact with our new strategy that could cause us to re-evaluate our plans.
The goals, and actions towards them, mentioned above are specific to 2025, though we intend the strategy to be effective for the foreseeable future. After 2025, we’ll revisit our priorities and see which goals and aims make sense going forward.
I’m not sure exactly what this change will look like, but my current impression from this post leaves me disappointed. I say this as someone who now works on AI full-time and is mostly persuaded of strong longtermism. I think there’s enough reason for uncertainty about the top cause and value in a broad community that central EA organizations should not go all-in on a single cause. This seems especially the case for 80,000 Hours, which brings people in by appealing to a general interest in doing good.
Some reasons for thinking cause diversification by the community/central orgs is good:
From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.
Existential risk is not most self-identified EAs’ top cause, and about 30% of self-identified EAs say they would not have gotten involved if it did not focus on their top cause (EA survey). So it does seem like you miss an audience here.
Organizations like 80,000 Hours set the tone for the community, and I think there’s good rule-of-thumb reasons to think focusing on one issue is a mistake. As 80K’s problem profile on factory farming says, factory farming may be the greatest moral mistake humanity is currently making, and it’s good to put some weight on rules of thumb in addition to expectations.
Timelines have shortened, but it doesn’t seem obvious whether the case for AGI being an existential risk has gotten stronger or weaker. There are signs of both progress and setbacks, and evidence of shorter timelines but potentially slower takeoff.
I’m also a bit confused because 80K seemed to recently re-elevate some non-existential risk causes on its problem profiles (great power war and factory farming; many more under emerging challenges). This seemed like the right call and part of a broader shift away from going all-in on longtermism in the FTX era. I think that was a good move and that keeping an EA community that is not only AGI is valuable.
Hey Zach,
(Responding as an 80k team member, though I’m quite new)
I appreciate this take; I was until recently working at CEA, and was in a lot of ways very very glad that Zach Robinson was all in on general EA. It remains the case (as I see it) that, from a strategic and moral point of view, there’s a ton of value in EA in general. It says what’s true in a clear and inspiring way, a lot of people are looking for a worldview that makes sense, and there’s still a lot we don’t know about the future. (And, as you say, non-fanaticism and pluralistic elements have a lot to offer, and there are some lessons to be learned about this from the FTX era)
At the same time, when I look around the EA community, I want to see a set of institutions, organizations, funders and people that are live players, responding to the world as they see it, making sure they aren’t missing the biggest thing currently happening (or, for an org like 80k whose main job includes communicating important things, making sure they aren’t letting their audiences miss it). Most importantly, I want people to act on their beliefs (with appropriate incorporation of heuristics, rules of thumb, outside views, etc.). And to the extent that 80k staff and leadership’s beliefs changed with the new evidence, I’m excited for them to be acting on it.
I wasn’t involved in this strategic pivot, but when I was considering the job, I was excited to see a certain kind of leaping to action in the organization as I was considering whether to join.
It could definitely be a mistake even within this framework (by causing 80k to not appeal to parts of its potential audience) or empirically (on size of AI risk, or sizes of other problems) or long term (because of the damage it does to the EA community or intellectual lifeblood / eating the seed corn). In the past I’ve worried that various parts of the community were jumping too fast into what’s shiny and new, but 80k has been talking about this for more than a year, which is reassuring.
I think the 80k leadership have thoughts about all of these, but I agree that this blog post alone doesn’t fully make the case.
I think the right answer to these uncertainties is some combination of digging in and arguing about them (as you’ve started here — maybe there’s a longer conversation to be had), and waiting to see how these bets turn out.
Anyway, I appreciate considerations like the ones you’ve laid out because I think they’ll help 80k figure out if it’s making a mistake (now or in the future), even though I’m currently really energized and excited by the strategic pivot.
Thanks @ChanaMessinger, I appreciate this comment, and think the tone here is healthier than the original announcement’s. Your well-written sentence below captures many of the important issues well:
“It could definitely be a mistake even within this framework (by causing 80k to not appeal to parts of its potential audience) or empirically (on size of AI risk, or sizes of other problems) or long term (because of the damage it does to the EA community or intellectual lifeblood / eating the seed corn).”
FWIW I think a clear mistake is the poor communication here: the most obvious and serious potential community impacts have been missed, and the tone is poor. If this had been presented in a way that made it look like the most serious potential downsides were considered, I would both feel better about it and be more confident that 80k has done a deep SWOT analysis here, rather than the really basic framing of the post, which is more like...
“AI risk is really bad and urgent, let’s go all in”
This makes the decision seem not only insensitive but also poorly thought through, which I’m sure is not the case. I imagine the chief concerns of the commenters were discussed at the highest level.
I’m assuming there are comms people at 80k and it surprises me that this would slip through like this.
Thanks for the feedback here. I mostly want to just echo Niel’s reply, which basically says what I would have wanted to. But I also want to add for transparency/accountability’s sake that I reviewed this post before we published it with the aim of helping it communicate the shift well – I focused mostly on helping it communicate clearly and succinctly, which I do think is really important, but I think your feedback makes sense, and I wish that I’d also done more to help it demonstrate the thought we’ve put into the tradeoffs involved and awareness of the costs. For what it’s worth, we don’t have dedicated comms staff at 80k—helping with comms is currently part of my role, which is to lead our web programme.
No it doesn’t! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.
Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities look probably more cost-effective.
I’m not sure if GiveWell top charities do? Preventing extinction is a lot of QALYs, and it might not cost more than a few $B per year of extra time bought in terms of funding Pause efforts (~$1/QALY!?)
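A rough reconstruction of where a figure like ~$1/QALY could come from, using assumed round numbers rather than anything stated above: treat one extra year of time bought as roughly one extra life-year for each of the ~8 billion people alive today, i.e. on the order of $8 \times 10^9$ QALYs, so
\[
\frac{\text{a few } \$10^9 \text{ per year of delay}}{\sim 8 \times 10^9 \text{ QALYs per year}} \;\approx\; \$1 \text{ per QALY or less.}
\]
This ignores discounting, nonhuman animals, and the probability that the time bought actually averts extinction, so it is only an order-of-magnitude illustration.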
By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it’s also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.
I included the qualifier “From an altruistic cause prioritization perspective” because I think that from an impartial cause prioritization perspective, the case is different. If you’re comparing existential risk to animal welfare and global health, the links in my comment I think make the case pretty persuasively that you need longtermism.
It’s not “longtermist” or “fanatical” at all (or even altruistic) to try and prevent yourself and everyone else on the planet (humans and animals) being killed in the near future by uncontrollable ASI[1] (quite possibly in a horrible, painful[2] way[3]).
Indeed, there are many non-EAs who care a great deal about this issue now.
I mention this as it’s a welfarist consideration, even if one doesn’t care about death in and of itself.
Ripped apart by self-replicating computronium-building nanobots, anyone?
Strongly endorsing Greg Colbourn’s reply here.
When ordinary folks think seriously about AGI risks, they don’t need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.
They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.
I’m not that surprised that the above comment has been downvoted to −4 without any replies (and this one will probably be buried by an even bigger avalanche of downvotes!), but it still makes me sad. EA will be ivory-tower-ing until the bitter end, it seems. It’s a form of avoidance. These things aren’t nice to think about. But it’s close now, so it’s reasonable for it to feel viscerally real. I guess it won’t be EA that saves us (from the mess it helped accelerate), if we do end up saved.
The comment you replied to
acknowledges the value of x-risk reduction in general from a non-longtermist perspective
clarifies that it is making a point about the marginal altruistic value of x-risk vs AW or GHW work and points to a post making this argument in more detail
Your response merely reiterates that x-risk prevention has substantial altruistic (and non-altruistic) value. This isn’t responsive to the claim about whether, under non-longtermist assumptions, that value is greater on the margin than AW or GHW work.
So even though I actually agree with the claims in your comment, I downvoted it (along with this one complaining about the downvotes) for being off-topic and not embodying the type of discourse I think the EA Forum should strive for.
Thanks for the explanation.
Whilst zdgroff’s comment “acknowledges the value of x-risk reduction in general from a non-longtermist perspective”, it downplays it quite heavily imo (and the OP comment does so even more, using the pejorative “fanatical”).
I don’t think the linked post makes the point very persuasively. Looking at the table, at best there is an equivalence.
I think a rough estimate of the cost effectiveness of pushing for a Pause is orders of magnitude higher.
You don’t need EAs Greg—you’ve got the general public!
Adding a bit more to my other comment:
For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I’m not totally sure—EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).
I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn’t accept fanatical views to prioritise them (though it may require caring some about potential future beings). (We have a bit on this here)
I agree this means we will miss out on an audience we could have if we fronted content on more causes. We hope to also appeal to new audiences with this shift, such as older people who are less naturally drawn to our previous messaging and who are, for example, more motivated by urgency. However, it seems plausible this shrinks our audience. This seems worth it because in doing so we’ll be telling people how urgent and pressing AI risks seem to us, and because it could still lead us to having more impact overall, since impact varies so much between careers, in part based on what causes people focus on.
I think the argument you linked to is reasonable. I disagree, but not strongly. But I think it’s plausible enough that AGI concerns (from an impartial cause prioritization perspective) require fanaticism that there should still be significant worry about it. My take would be that this worry means an initially general EA org should not overwhelmingly prioritize AGI.
Hey Zach. I’m about to get on a plane so won’t have time to write a full response, sorry! But wanted to say a few quick things before I do.
Agree that it’s not certain or obvious that AI risk is the most pressing issue (though it is 80k’s best guess & my personal best guess, and I don’t personally have the view that it requires fanaticism.) And I also hope the EA community continues to be a place where people work on a variety of issues—wherever they think they can have the biggest positive impact.
However, our top commitment at 80k is to do our best to help people find careers that will allow them to have as much positive impact as they can. & We think that to do that, more people should strongly consider and/or try out working on reducing the variety of risks that we think transformative AI poses. So we want to do much more to tell them that!
In particular, from a web-specific perspective, I feel that the website isn’t currently consistent with the possibility of short AI timelines & the possibility that AI might not only pose risks from catastrophic misalignment but also other risks, plus the fact that it will probably affect many other cause areas. Given the size of our team, I think we need to focus our new content capacity on changing that.
I think this post I wrote a while ago might also be relevant here!
https://forum.effectivealtruism.org/posts/iCDcJdqqmBa9QrEHv/faq-on-the-relationship-between-80-000-hours-and-the
Will circle back more tomorrow / when I’m off the flight!
Yeah, FWIW, it’s mine too. Time will tell how I feel about the change in the end. That EA Forum post on the 80K-EA community relationship feels very appropriate to me, so I think my disagreement is about the application.
Zach wrote this last year in his first substantive post as CEO of CEA, announcing that CEA will continue to take a “principles-first” approach to EA. (I’m Zach’s Chief of Staff.) Our approach remains the same today: we’re as motivated as ever about stewarding the EA community and ensuring that together we live up to our full potential.
Collectively living up to our full potential ultimately requires making a direct impact. Even under our principles-first approach, impact is our north star, and we exist to serve the world, not the EA community itself. But Zach and I continue to believe there is no other set of principles that has the same transformative potential to address the world’s most pressing problems as EA principles. So, in our assessment, at this moment in time, the best way for CEA to make progress towards our ultimate goal is sustainably growing the number of people putting EA principles into practice.
In reaching and implementing their decision to shift their strategic approach, Niel and others at 80k are putting those principles into practice. While we might disagree about some of the particulars, or draw different conclusions, we don’t disagree that updating in response to new information is appropriate, that AI risk reduction is a critically important cause, or that achieving progress at the scale and speed required will require making some hard trade-offs. We agree that there will be implications and opportunities for our community, including for CEA, in terms of filling some of the gaps 80k might leave behind, and this transition will be made smoother by the fact we are all still shooting for the same north star.
I want to recognize that these are big shoes to fill: 80k has built an incredibly impressive team, developed a set of remarkable products, and earned great respect from a wide audience. I’m both sad that this unique combination won’t be deployed so directly in stewardship of EA, and excited to see what it can achieve with even greater focus.
To the extent that this post helps me understand what 80,000 Hours will look like in six months or a year, I feel pretty convinced that the new direction is valuable—and I’m even excited about it. But I’m also deeply saddened that 80,000 Hours as I understood it five years ago—or even just yesterday—will no longer exist. I believe that organization should exist and be well-resourced, too.
Like others have noted, I would have much preferred to see this AGI-focused iteration launched as a spinout or sister organization, while preserving even a lean version of the original, big-tent strategy under the 80K banner, and not just through old content remaining online. A multi-cause career advising platform with thirteen years of refinement, SEO authority, community trust, and brand recognition is not something the EA ecosystem can easily replicate. Its exit from the meta EA space leaves a huge gap that newer and smaller projects simply can’t fill in the short term.
I worry that this shift weakens the broader ecosystem, making it harder for promising people to find their path into non-AI cause areas—some of which may be essential to navigating a post-AGI world. Even from within an AGI-focused lens, it’s not obvious that deprioritizing other critical problems is a winning long-term bet.
If transformative AI is just five years away, then we need people who have spent their careers reducing nuclear risks to be doing their most effective work right now—even if they’re not fully bought into AGI timelines. We need biosecurity experts building robust systems to mitigate accidental or deliberate pandemics—whether or not they view that work as directly linked to AI. And if we are truly on the brink of catastrophe, we still need people focused on minimizing human and nonhuman suffering in the time we have left. That’s what made 80K so special: it could meet people where they were, offer intellectually honest cause prioritization, and help them find a high-impact path even if they weren’t ready to work on one specific worldview.
I have no doubt the 80K team approached this change with thoughtfulness and passion for doing the most good. But I hope they’ll reconsider: preserving 80K as 80K—a broadly accessible, big tent hub—and launching this new AGI-centered initiative under a distinct name. That way, we could get the best of both worlds: a strong, focused push on helping people work on safely navigating the transition to a world with AGI, without losing one of the EA community’s most trusted entry points.
Hey Rocky —
Thanks for sharing these concerns. These are really hard decisions we face, and I think you’re pointing to some really tricky trade-offs.
We’ve definitely grappled with the question of whether it would make sense to spin up a separate website that focused more on AI. It’s possible that could still be a direction we take at some point.
But the key decision we’re facing is what to do with our existing resources — our staff time, the website we’ve built up, our other programmes and connections. And we’ve been struggling with the fact that the website doesn’t really fully reflect the urgency we believe is warranted around rapidly advancing AI. Whether we launch another site or not, we want to honestly communicate about how we’re thinking about the top problem in the world and how it will affect people’s careers. To do that, we need to make a lot of updates in the direction this post is discussing.
That said, I’ve always really valued the fact that 80k can be useful to people who don’t agree with all our views. If you’re sceptical about AI having a big impact in the next few decades, our content on pandemics, nuclear weapons, factory farming — or our general career advice — can still be really useful. I think that will remain true even with our strategy shift.
I also think this is a really important point:
I think we’re mostly in agreement here — work on nuclear risks and biorisks remains really important, and last year we made efforts to make sure our bio and nuclear content was more up to date. We recently made an update about mirror bio risks, because they seem especially pressing.
As the post above says: “When deciding what to work on, we’re asking ourselves ‘How much does this work help make AI go better?’, rather than ‘How AI-related is it?’” So to the extent that other work has a key role to play in the risks that surround a world with rapidly advancing AI, it’s clearly in scope of the new strategy.
But I think it probably is helpful for people doing work in areas like nuclear safety and bio to recognise the way short AI timelines could affect their work. So if 80k can communicate that to our audience more clearly, and help people figure out what that means they should do for their careers, it could be really valuable.
I do think we should be absolutely clear that we agree with this — it’s incredibly valuable that work to minimise existing suffering continues. I support that happening and am incredibly thankful to those who do it. This strategy doesn’t change that a bit. It just means 80k thinks our next marginal efforts are best focused on the risks arising from AI.
On the broader issue of what this means for the rest of the EA ecosystem, I think the risks you describe are real and are important to weigh. One reason we wanted to communicate this strategy publicly is so others could assess it for themselves and better coordinate on their paths forward. And as Conor said, we really wish we didn’t have to live in a world where these issues seem as urgent as they do.
But I think I see the costs of the shift as less stark. We still plan to have our career guide up as a central piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t think I see this as being as clear a break from the past as you might.
At the highest level, though, we do face a decision about whether to focus more on AI and the plausibly short timelines to AGI, or to spend time on a wider range of problem areas and take less of a stance on timelines. Focusing more does have the risk that we won’t reach our traditional audience as well, which might even reduce our impact on AI; but declining to focus more has the risk of missing out on other audiences we previously haven’t reached, failing to faithfully communicate our views about the world, and missing out on big opportunities to positively work on what we think is the most pressing problem we face.
As the post notes, while we are committed to making the strategic shift, we’re open to changing our minds if we get important updates about our work. We’ll assess how we’re performing on the new strategy, whether there are any unexpected downsides, and whether developments in the world are matching our expectations. And we definitely continue to be open to feedback from you and others who have a different perspective on the effects 80k is having in the world, and we welcome input about what we can do better.
Minor point, but I’ve seen big tent EA as referring to applying effectiveness techniques to any charity. Then maybe broad current EA causes could be called the middle-sized tent. Then just GCR/longtermism could be called the small tent (which 80k already largely pivoted to years ago, at least considering their impact multipliers). Then just AI could be the very small tent.
(Tangent: “big tent EA” originally referred to encouraging a broad set of views among EAs while ensuring EA is presented as a question, but semantic drift I suppose...)
I was referring to this earlier academic article. I’ve also heard of discussion along a similar vein in the early days of EA.
Thanks! I wasn’t sure the best terminology to use because I would never have described 80K as “cause agnostic” or “cause impartial” and “big tent” or “multi-cause” felt like the closest gesture to what they’ve been.
I think this is going to be hard for university organizers (as an organizer at UChicago EA).
At the end of our fellowship, we always ask the participants to take some time to sign up for 1-1 career advice with 80k, and this past quarter the other organizers and I agreed that we felt somewhat uncomfortable doing this, given that we knew 80k was leaning heavily towards AI—while we presented it as simply being very good for getting advice on all types of EA careers. This shift will probably mean that we stop sending intro fellows to 80k for advice, and we will have to start outsourcing professional career advising to somewhere else (not sure where this will be yet).
Given this, I wanted to know if 80k (or anyone else) has any recommendations on what EA University Organizers in a similar position should do (aside from the linked resources like Probably Good).
Another place people could be directed for career advice: https://probablygood.org/
Since last semester, we have made career 1-on-1s a mandatory part of our introductory program.
This semester, we will have two 1-on-1s:
The first one will be a casual conversation where the mentee and mentor get to learn more about each other.
The second one will be more in-depth: we will share this 1-on-1 sheet (shamelessly poached from 80K), the mentees will fill it out before the meeting, have a ≤1 hour conversation with a mentor of their choice, and post-meeting, the mentor will add further resources to the sheet that may be helpful.
The advice we give during these sessions ends up being broader than just the top EA causes, although we are most helpful in cases where:
— someone is curious about EA/adjacent causes
— someone has graduate-school-related questions
— someone wants general “how to best navigate college, plan for internships, etc.” advice
Do y’all have something similar set up?
As a (now ex-) UChicago organizer and current Organizer Support Program mentor (though this is all in my personal capacity), I share Noah’s concerns here.
I see how reasonable actors in 80k’s shoes could come to the conclusions they came to, but I think this is a net loss for university groups, which disappoints me — I think university groups are some of the best grounds we have to motivate talented young people to devote their careers to improving the world, and I think the best way to do this is by staying principles-first and building a community around the core ideas of scope sensitivity, scout mindset, impartiality, and recognition of tradeoffs.
I know 80k isn’t disavowing these principles, but the pivot does mean 80k is de-emphasizing them.
All this makes me think that 80k will be much less useful to university groups, because it
a) makes it much tougher for us to recommend 80k to interested intro fellows (personalized advising, even if it’s infrequently granted, is a powerful carrot, and the exercises you have to complete to finish the advising are also very useful), and b) means that university groups will have to find a new advising source for their fresh members who haven’t picked a cause-area yet.
I hear this; I don’t know if this is too convenient or something, but, given that you were already concerned about the prioritization 80K was putting on AI (and I don’t at all think you’re alone there), I hope there’s something more straightforward and clear about the situation as it now stands, where people can opt in or out of this particular prioritization, or of hearing the case for it.
Appreciate your work as a university organizer—thanks for the time and effort you dedicate to this (and also hello from a fellow UChicagoan, though many years ago).
Sorry I don’t have much in the way of other recommendations; I hope others will post them.
Even though we might have been concerned about the prioritisation, it still made sense to refer to 80k because it still at least gave the impression of openness to a range of causes.
Now even if the good initial advice remains, all roads lead to AI, so it feels like a bit of a bait and switch to send someone there when the advice can only lead one way from 80k’s perspective.
Yes it’s more “straightforward” and clear, but it also leaves a big, clear gap now on the trusted, well-known non-AI career advice front. Uni groups will struggle a bit, but hopefully the career advice marketplace continues to improve.
Huh, I think this way is a substantial improvement—if 80K has strong views about where their advice leads, far better to be honest about this and let people make informed decisions than to give the mere appearance of openness.
From the update, it seems that:
80K’s career guide will remain unchanged
I especially feel good about this, because the guide does a really good job of emphasizing the many approaches to pursuing an impactful career
n = 1 anecdotal point: during tabling early this semester, a passerby mentioned that they knew about 80K because a professor had prescribed one of the readings from the career guide in their course. The professor in question and the class they were teaching had no connection with EA, AI Safety, or our local EA group.
If non-EAs also find 80K’s career guide useful, that is a strong signal that it is well-written, practical, and not biased to any particular cause
I expect and hope that this remains unchanged, because we prescribe most of the career readings from that guide in our introductory program
Existing write-ups on non-AI problem profiles will also remain unchanged
There will be a separate AGI career guide
But the job board will be more AI focused
Overall, this tells me that groups should still feel comfortable sharing readings from the career guide and on other problem profiles, but selectively recommend the job board primarily to those interested in “making AI go well” or mid/senior non-AI people. Probably Good has compiled a list of impact-focused job boards here, so this resource could be highlighted more often.
That’s interesting and would be nice if it was the case. That wasn’t the vibe I got from the announcement but we will see.
Thanks for raising this Noah.
In addition to the ideas raised above, some other thoughts:
Giving fellowship members a menu of career-coaching options they could apply to (like the trifecta Conor mentions here, who all offer career advising)
Consider encouraging people to sign up to community and networking events, like EAG/x’s
Directing folks to 80k resources with more caveats about where you think we might be helpful for your group, and what things we might be overlooking
We think that lots of our resources, like our career guide and career planning template, should still be useful irrespective of cause prioritisation, and caveating might help allay your worries about people misconstruing our focus.
We also hope that our explicit focusing on AGI can help our own site / resources be more clear and transparent about our views on what’s most pressing.
Thanks for sharing this update. I appreciate the transparency and your engagement with the broader community!
I have a few questions about this strategic pivot:
On organizational structure: Did you consider alternative models that would preserve 80,000 Hours’ established reputation as a more “neutral” career advisor while pursuing this AI-focused direction? For example, creating a separate brand or group dedicated to AI careers while maintaining the broader 80K platform for other cause areas? This might help avoid potential confusion where users encounter both your legacy content presenting multiple cause areas and your new AI-centric approach.
On the EA pathway: I’m curious about how this shift might affect the “EA funnel”—where people typically enter effective altruism through more intuitive cause areas like global health or animal welfare before gradually engaging with longtermist ideas like AI safety. By positioning 80,000 Hours primarily as an AI-focused organization, are you concerned this might make it harder for newcomers to find their way into the community if AI risk arguments initially seem abstract or speculative to them?
On reputational considerations: Have you weighed the potential reputational risks if AI development follows a more moderate trajectory than anticipated? If we see AI plateau at impressive but clearly non-transformative capabilities, this strategic all-in approach could affect 80,000 Hours’ credibility for years to come. The past decade of 80K’s work as a cause-diverse advisor has created tremendous value—might a spinoff organization for AI-specific work better preserve that accumulated trust while still allowing you to pursue what you see as the highest-impact path?
Hi Håkon, Arden from 80k here.
Great questions.
On org structure:
One question for us is whether we want to create a separate website (“10,000 Hours”?) that we cross-promote from the 80k website, or to change the 80k website a bunch to front the new AI content. That’s something we’re still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we’re not currently thinking about making an entire new organisation.
Why not?
For one thing, it’d be a lot of work and time, and we feel this shift is urgent.
Primarily, though, 80,000 Hours is a cause-impartial organisation, and we think that means prioritising the issues we think are most pressing (& telling our audience about why we think that.)
What would be the reason for keeping one 80k site instead of making a 2nd separate one?
As I wrote to Zach above, I think the site currently doesn’t represent the possibility of short timelines or the variety of risks AI poses well, even though it claims to be telling people key information they need to know to have a high impact career. I think that’s key information, so want it to be included very prominently.
As a commenter noted below, it’d take time and work to build up an audience for the new site.
But I’m not sure! As you say, there are reasons to make a separate site as well.
On EA pathways: I think Chana covered this well – it’s possible this will shrink the number of people getting into EA ways of thinking, but it’s not obvious. AI risk doesn’t feel so abstract anymore.
On reputation: this is a worry. We do plan to express uncertainty about whether AGI will indeed arrive as quickly as we worry it will, and to be clear that if people pursue a route to impact that depends on fast AI timelines, they’re making a bet that might not pay off. However, we think it’s important both for us & for our audience to act under uncertainty, using rules of thumb but also thinking about expected impact.
In other words – yes, our reputation might suffer from this if AI progresses slowly. If that happens, it will probably be worse for our impact, but better for the world, and I think I’ll still feel good about expressing our (uncertain) views on this matter when we had them.
I feel like this argument has been implicitly holding back a lot of EA focus on AI (for better or worse), so thanks for putting it so clearly. I always wonder about the asymmetry of it: what about the reputational benefits that accrue to 80K/EA for correctly calling the biggest cause ever? (If they’re correct)
I think others at 80k are best placed to answer this (for time zone reasons I’m most active in this thread right now), but for what it’s worth, I’m worried about the loss at the top of the EA funnel! I think it’s worth it overall, but I think this is definitely a hit.
That said, I’m not sure AI risk has to be abstract or speculative! AI is everywhere, I think it feels very real to some people (realer to some than to others), and the problems we’re encountering are rapidly becoming less speculative (we have papers showing at least some amount of alignment faking, scheming, obfuscation of chain of thought, reward hacking, all that stuff!)
One question I have is how much it will be the case in the future that people looking for a general “doing good” framework will in fact bounce off of the new 80k. For instance, it could be the case that AI is so ubiquitous that it would feel totally out of touch to not be discussing it a lot. More compellingly to me, I think it’s 80k’s job to make the connection; doing good in the current world requires taking AI and its capabilities and risks seriously. We are in an age of AI, and that has implications for all possible routes to doing good.
I like your take on reputation considerations; I think lots of us will definitely have to eat non-zero crow if things really plateau, but I think the evidence is strong enough to care deeply about this and prioritize it, and I don’t want to obscure that we believe that for the reputational benefit.
From a practical point of view, if all the traffic and search/other reputation is to 80k website, and the timelines are perceived to be short, I could imagine it makes sense to the team to directly adjust the focus of the website rather than take the years to build up a separate, additional brand.
Makes sense. Just want to flag that tensions like these emerge because 80K is simultaneously a core part of the movement and also an independent organization with its own goals and priorities.
I’m a little sad and confused about this.
First, I think it’s a bit insensitive that a huge leading org like this would write such a significant post with almost no recognition that this decision is likely to hurt and alienate some people. It’s unfortunate that the post is written in a warm and upbeat tone yet is largely bereft of emotional intelligence and recognition of the potential harms of this decision. I’m sure this is unintentional, but it still feels tone deaf. Why not acknowledge the potential emotional and community significance of this decision, and be a bit more humble in general? Something like...
“We realise this decision could be seen as sidelining the importance of many people’s work and could hurt or confuse some people. We encourage you to keep working on what you believe is most important, and we realise that even after much painstaking thought we’re still quite likely to be wrong here.”
I also struggle to understand how this is the best strategy as an onramp for people to EA—assuming that is still part of the purpose of 80k. Yes, there are other orgs which do career advising and direction, but they are still minnows compared with you. Even if your sole goal is to get as many people into AI work as possible, I think you could well achieve that better through helping people understand worldview diversification and helping them make up their own minds, while of course keeping a heavy focus on AI safety and clearly having that as your no. 1 cause.
It could also feel like a kick in the teeth to the huge numbers of people who are committed to EA principles, working in animal welfare and global health, and who are skeptical about the value of AI safety work for a range of reasons, whether it’s EA’s sketchy record to date, tractability, or just very different AGI timelines. Again, a bit more humility might have softened the blow here.
Why not just keep AI safety as your main cause area while still having some diversification at least? I get that you’re making a bet, but I think it’s an unnecessary one, both for the togetherness and growth of the EA community in general, and possibly even if your sole metric is attracting more good people to work on making the AI trajectory better.
You also put many of us in a potentially awkward position of disagreeing with the position of one of the top 3 or so EA orgs, a position I haven’t been in before. If anyone had asked me a week ago what I thought of 80,000 Hours, I would have said something like, “They’re a great organization who helps you think about how to do the most good possible with your life. Personally I think they have a bit too much focus on AI risk, but they are an incredible resource for anyone thinking about what to do with their future, so check them out.”
Now I’m not sure what I’ll say, but it’s hard not to be honest and say I disagree with 80k’s sole focus on AI and point people somewhere else, which doesn’t feel great for the “big EA tent” or for bolstering “EA as an idea”.
Despite all this, yes, you might be right that sidelining many people and their work, and risking splintering the community on some level, might be worth it for the good of AI safety, but boy is that some bet to make.
I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause. As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.
A couple of comments on other parts of your post, in case it’s helpful:
Our purpose is not to get people into EA, but to help solve the world’s most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k’s relationship to the EA community here.) But I also think the world has changed a lot and will change even more in the near future, and it would be surprising if 80k’s best path to impact didn’t change as well. I think focusing our ongoing efforts more on making the development of AGI go well is our best path to impact, building on what 80k has created over time.
But I might be wrong about this, and I think it’s reasonable that others disagree.
I don’t expect the whole EA community to take the same approach. CEA has said it wants to take a “principles-first approach”, rather than focusing more on AI as we will (though to be clear, our focus is driven by our principles, and we want to still communicate that clearly).
I think open communication about what different orgs are prioritising and why is really vital for coordination and to avoid single-player thinking. My hope is that people in the EA community can do this without making others with different cause prio feel bad about their disagreements or differences in strategy. I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.
Unfortunately I think that all the options in this space involve taking bets in an important way. We also think that it’s costly if users come to our site and don’t quickly understand that we think the current AI situation deserves societal urgency.
On the other costs that you mention in your post, I think I see them as less stark than you do. Quoting Cody’s response to Rocky above:
> We still plan to have our career guide up as a key piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t think I see this as being as clear a break from the past as you might.
I also want to thank you for sharing your concerns, which I realise can be hard to do. But it’s really helpful for us to know how people are honestly reacting to what we do.
Thanks for the thoughtful reply; I really appreciate it. Having the CEO of an org reply to comments is refreshing, and I actually think it’s an excellent use of a few hours of time.
“I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.” This is fantastic to hear and makes a big difference, thanks for this.
“Our purpose is not to get people into EA, but to help solve the world’s most pressing problems.”—This might be your purpose, but the reality is that 80,000 Hours plays an enormous role in getting people into EA.
Losing some (or a lot) of this impact could have been recognised as a potentially large (perhaps the largest) tradeoff of the new direction. What probably hit me most about the announcement was the seeming lack of recognition of the potentially most important tradeoffs—it makes it seem like the tradeoffs haven’t been considered, when I’m sure they have.
You’re right that we make bets whatever we do or don’t do.
Thanks again for the reply!
Sorry to hear you found this saddening and confusing :/
Just to share another perspective: To me, the post did not come across as insensitive. I found the tone clear and sober, as I’m used to from 80k content, and I appreciated the explicit mention that there might now be space for another org to cover other cause areas like bio or nuclear.
These trade-offs are always difficult, but like any EA org, 80k should do what they consider highest expected impact overall rather than what’s best for the EA community, and I’m glad they’re doing that.
What confused/saddened me wasn’t so much their reasons for the change, but why they didn’t address perhaps the 3-5 biggest potential objections / downsides / trade-offs of the decision. They even had a section “What does this mean for non-AI cause areas?” without stating the most important things that this means for non-AI cause areas, which include:
1. Members of the current community feeling left out/frustrated because, for the first time, they are no longer aligned with / no longer served by a top EA organisation
2. (From ZDGroff) “Organizations like 80,000 Hours set the tone for the community, and I think there’s good rule-of-thumb reasons to think focusing on one issue is a mistake. As 80K’s problem profile on factory farming says, factory farming may be the greatest moral mistake humanity is currently making, and it’s good to put some weight on rules of thumb in addition to expectations.”
3. The risk of narrowing the funnel into EA, as fewer people will be attracted to a narrower AI focus (mentioned a few times). This seems like a pretty serious issue not to address, given that 80k (like it or not) is an EA front page
Just because 80k doesn’t necessarily have these issues as their top goal doesn’t mean these issues don’t exist. I sense a bit of an “ostrich” mindset. I’ve heard a couple of times that they aren’t aiming to be an onramp to EA, but that doesn’t stop them from being one of the main onramps, as evidenced by studies that have asked people how they got into EA...
I think the tone of the post is somewhat tone deaf, and this could easily have been mitigated with some simple soft and caring language, such as “we realise that some people may feel...” and “This could make it harder for...”. Maybe that’s not the tone 80k normally take, but I think that’s a nicer way to operate, and it costs you basically nothing.
Morally, I am impressed that you are doing an in many ways socially awkward and uncomfortable thing because you think it is right.
BUT
I strongly object to you citing the Metaculus AGI question as significant evidence of AGI by 2030. I do not think that when people forecast that question, they are necessarily forecasting when AGI, as commonly understood or in the sense that’s directly relevant to X-risk, will arrive. Yes, the title of the question mentions AGI. But if you look at the resolution criteria, all an AI model has to do in order to resolve the question ‘yes’ is pass a couple of benchmarks involving coding and general knowledge, put together a complicated model car, and imitate. None of that constitutes being AGI in the sense of “can replace any human knowledge worker in any job”. For one thing, it doesn’t involve any task that is carried out over a time span of days or weeks, but we know that memory and coherence over long time scales are things current models seem to be relatively bad at, compared to passing exam-style benchmarks. It also doesn’t include any component that tests the ability of models to learn new tasks at human-like speed, which again seems to be an issue with current models. Now, maybe despite all this, it’s actually the case that any model that can pass the benchmark will in fact be AGI in the sense of “can permanently replace almost any human knowledge worker”, or at least will obviously be only 1-2 years of normal research progress away from that. But that is a highly substantive assumption in my view.
I know this is only one piece of evidence you cite, and maybe it isn’t actually a significant driver of your timelines, but I still think it should have been left out.
Thanks David. I agree that the Metaculus question is a mediocre proxy for AGI, for the reasons you say. We included it primarily because it shows the magnitude of the AI timelines update that we and others have made over the past few years.
In case it’s helpful context, here are two footnotes that I included in the strategy document that this post is based on, but that we cut for brevity in this EA Forum version:
This DeepMind definition of AGI is the one that we primarily use internally. I think that we may get strategically significant AI capabilities before this though, for example via automated AI R&D.
On the Metaculus definition, I included this footnote:
Thanks, that is reassuring.
Curious if you have better suggestions for forecasts to use, especially for communicating to a wider audience that’s new to AI safety.
I haven’t read it, but Zershaaneh Qureshi at Convergence Analysis wrote a recent report on pathways to short timelines.
I don’t know of anything better right now.
I’ve been very concerned that EA orgs, particularly the bigger ones, would be too slow to orient and react to changes in the urgency of AI risk, so I’m very happy that 80k is making this shift in focus.
Any change this size means a lot of work in restructuring teams, their priorities, and what staff are working on, but I think this move ultimately plays to 80k’s strengths. Props.
I want to extend my sympathies to friends and organisations who feel left behind by 80k’s pivot in strategy. I’ve talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we’re in.
I’m very glad 80,000 Hours is making this change. I’m not glad that we’ve entered the world where this change feels necessary.
To elaborate on the job board changes mentioned in the post:
We will continue listing non-AI-related roles, but will be raising our bar. Some cause areas we still consider relevant to AGI (for example, pandemic preparedness). For others, we still think the top roles could benefit from talented people with great fit, so we’ll continue to post those roles.
We’ll be highlighting some roles more prominently. Even among the roles we post, we think the best roles can be much more impactful than others. Based on conversations with experts, we have some guesses about which roles these are, and we want to feature them a little more strongly.
I think the post would have been far better if this kind of sentiment had been front and center. Obviously it’s still only a softener, but it shows an understanding and empathy that the CEO’s post missed.
> “I want to extend my sympathies to friends and organisations who feel left behind by 80k’s pivot in strategy. I’ve talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we’re in.”
Hey Nick, just wanted to say thanks for this suggestion. We were trying to keep the post succinct, but in retrospect I would have liked to include more of the mood of Conor’s comment here without losing the urgency of the original post. I too hate that this is the timeline we’re in.
Appreciate this—perhaps this can be improved in other communications outside the forum context! Even in appealing to people outside of EA to focus on AI, I think this kind of sentiment might help.
Makes sense, seems like a good application of the principle of cause neutrality: being willing to update on information and focus on the most cost-effective cause areas.
From the perspective of someone who thinks AI progress is real and might happen quickly over the next decade, I am happy about this update. Barring Ezra Klein and the Kevin guy from NYT, the majority of mainstream media publications are not taking AI progress seriously, so hopefully this brings some balance to the information ecosystem.
From the perspective of “what does this mean for the future of the EA movement,” I feel somewhat negatively about this update. Non-AIS people within EA are already dissatisfied by the amount of attention, talent, and resources that are dedicated to AIS, and I believe this will only heighten that feeling.
So well said, Akash, nice one.
I’d love to hear in more detail about what this shift will mean for the 80,000 Hours Podcast, specifically.
The Podcast is a much-loved and hugely important piece of infrastructure for the entire EA movement. (Kudos to everyone involved over the years in making it so awesome—you deserve huge credit for building such a valuable brand and asset!)
Having a guest appear on it to talk about a certain issue can make a massive real-world difference, in terms of boosting interest, talent, and donations for that issue. To pick just one example: Meghan Barrett’s episode on insects seems to have been super influential. I’m sure that other people in the community will also be able to pick out specific episodes which have made a huge difference to interest in, and real-world action on, a particular issue.
My guess is that to a large extent this boosted activity and impact for non-AI issues does not “funge” massively against work on AI. The people taking action on these different issues would probably not have alternatively devoted a similar level of resources to AI safety-related stuff. (Presumably there is *some* funging going on, but my gut instinct is that it’s probably pretty low(?)) Non-AI-related content on the 80K Podcast has been hugely important for growing and energizing the whole EA movement and community.
Clearly, though, internally within the 80k team, there’s an opportunity cost to producing it, versus only doing AI content.
It feels like it would be absolutely awful—perhaps close to disastrous—for the non-AI bits of EA, and adjacent topics, if the Podcast were to only feature AI-related content in future. It won’t be completely obvious and salient that this is the effect. But, counterfactually, I think it will probably be really, really bad, going forward, to not have any new non-AI content on the Podcast.
It would be great to hear more about plans here. My guess (hope?!) is that it might still be advantageous to keep producing a range of content, in order to keep a broader listenership/wider “top-of-funnel”?
If the plan is to totally discontinue non-AI related content, I wonder if it would be possible to consider some steps that might be taken to ameliorate the effects of this on other issues and cause areas. For example, in a spirit of brainstorming, maybe 80k could allow other groups to record and release podcast episodes onto the 80k Podcast channel (or a new “vertical”/sub-brand of it)? (Obviously 80k would have a veto and only release stuff which they thought meets their high quality bar.) This feels like it could be really useful in terms of allowing non-AI groups to access the audience of the Podcast, whilst allowing 80k’s in-house resources to pivot to an AI focus.
Perhaps there are other cooperative options that could be considered along these lines if the plan is to only make AI content going forward.
I should stress again my admiration and gratitude for the 80k team in creating such a cool and valuable thing as the Podcast in the first place—I’m sure this sentiment is widely shared!
Thanks for your comment and appreciation of the podcast.
I think the short story is that yes, we’re going to be producing much less non-AI podcast content than we previously were — over the next two years, we tentatively expect ~80% of our releases to be AI/AGI focused. So we won’t entirely stop covering topics outside of AI, but those episodes will be rarer.
We realised that in 2024, only around 12 of the 38 episodes we released on our main podcast feed were focused on AI and its potentially transformative impacts. On reflection, we think that doesn’t match the urgency we feel about the issue or how much we should be focusing on it.
This decision involved very hard tradeoffs. It comes with major downsides, including limiting our ability to help motivate work on other pressing problems, along with the fact that some people will be less excited to listen to our podcast once it’s more narrowly focused. But we also think there’s a big upside: more effectively contributing to the conversation about what we believe is the most important issue of this decade.
On a personal level, I’ve really loved covering topics like invertebrate welfare, global health, and wild animal suffering, and I’m very sad we won’t be able to do as much of it. They’re still incredibly important and neglected problems. But I endorse the strategic shift we’re making and think it reflects our values. I’m also sorry it will disappoint some of our audience, but I hope they can understand the reasons we’re making this call.
There’s something I’d like to understand here. Most of the individuals that an AGI will affect will be animals, including invertebrates and wild animals. This is because they are very numerous, even if one were to grant them a lower moral value (although artificial sentience could be up there too). AI is already being used to make factory farming more efficient (the AI for Animals newsletter covers this in more detail).
Is this an element you considered?
Some people in AI safety seem to consider only humans in the equation, while some assume that an aligned AI will, by default, treat them correctly. Conversely, some people push for an aligned AI that takes into account all sentient beings (see the recent AI for animals conference).
I’d like to know what 80k’s position on that topic will be (if this is public information).
Thanks for asking. Our definition of impact includes non-human sentient beings, and we don’t plan to change that.
Thanks for the rapid and clear response, Luisa—it’s very much appreciated. I’m incredibly relieved and pleased to hear that the Podcast will still be covering some non-AI stuff, even if it’s less frequently than previously. It feels like those episodes have huge impact, including in worlds where we see a rapid AI-driven transformation of society—e.g. by increasing the chances that whoever/whatever wields power in the future cares about all moral patients, not just humans.
Hope you have fun making those, and all, future episodes :)
This is probably motivated reasoning on my part, but the more I think about this, I think it genuinely probably does make sense for 80k to try to maintain as big and broad an audience for the Podcast as possible, whilst also ramping up its AI content. The alternative would be to turn the Podcast effectively into an only-AI thing, which would presumably limit the audience quite a lot (?) I’m genuinely unsure what is the best strategy here, from 80k’s point of view, if the objective is something like “maximise listenership for AI related content”. Hopefully, if it’s a close call, they might err on the side of broadness, in order to be cooperative with the wider EA community.
I have a complicated reaction.
First, I think @NickLaing is right to point out that there’s a missing mood here and to express disappointment that it isn’t being sufficiently acknowledged.
2. My assumption is that the direction change is motivated by factors like:
A view of AI as a particularly time-sensitive area right now vs. areas like GHD often having a slower path to marginal impact (in part due to the excellence and strength of existing funding-constrained work).
An assumption that there are / will be many more net positions to fill in AI safety for the next few years, especially to the extent one thinks that funding will continue to shift in this direction. (Relatedly, one might think there will be relatively few positions to fill in certain other cause areas.)
I would suggest that these kinds of views and assumptions don’t imply that people who are already invested in other cause areas should shift focus. People who are already on a solid path to impact are not, as I understand it, 80K’s primary target audience.
3. I’m generally OK with 80K going in this direction if that is what its staff, leadership, and donors want. I’ve taken a harder-line stance on this sort of thing to the extent that I see something as core infrastructure that is a natural near-monopoly (e.g., the Forum, university groups), in which case I think there’s an enhanced obligation to share the commons. Here, there’s nothing inherent about career advising that is near-monopolistic (cf. Probably Good and Animal Advocacy Careers, which operate in analogous spaces). I would expect the new 80K to make at least passing reference to the existence of other EA career advice services for those who decide they want to work in another cause area. Thus, to the extent that there are advisors interested in giving advice in these areas, advisees interested in receiving that advice, and funders interested in supporting those areas, there’s no clear reason why alternative advisors would not fill the gap left by 80K here. I’d like to have seen more lead time, but I get that the situation in AI is rapidly evolving and that this is a reaction to external developments.
4. I think part of the solution is to stop thinking of 80K as (quoting Nick’s comment) “one of the top 3 or so EA orgs” in the same sense one might have considered it before this shift. Of course, it’s an EA org in the same sense that (e.g.) Animal Advocacy Careers is an EA org, but after today’s announcement it shouldn’t be seen as a broad-tent EA org in the same vein as (e.g.,) GWWC. Therefore, we should be careful not to read a shift in the broader community’s cause prio into 80K’s statements or direction. This may change how we interact with it and defer (or not) to it in the future. For example, if someone wants to point a person toward broad-based career advice, Probably Good is probably the most appropriate choice.
5. I too am concerned about the EA funnel / onramp / tone-setting issues that others have written about, but don’t have much to add on those.
I love point 3 “to the extent that I see something as core infrastructure that is a natural near-monopoly (e.g., the Forum, university groups) [...] I think there’s an enhanced obligation to share the commons”—that’s a good articulation of something I feel about Forum stewardship.
I generally support the idea of 80k Hours putting more emphasis on AI risk as a central issue facing our species.
However, I think it’s catastrophically naive to frame the issue as ‘helping the transition to AGI go well’. This presupposes that there is a plausible path for (1) AGI alignment to be solved, for (2) global AGI safety treaties to be achieved and enforced in time, and for (3) our kids to survive and flourish in a post-AGI world.
I’ve seen no principled arguments to believe that any of these three things can be achieved. At all. And certainly not in the time frame we seem to have available.
So the key question is—if there is actually NO credible path for ‘helping the transition to AGI go well’, should 80k Hours be pursuing a strategy that amounts to a whole lot of cope, and rearranging deck chairs on the Titanic, and gives a false sense of comfort and security to AI devs, and EA people, and politicians, and the general public?
I think 80k Hours has done a lot of harm in the past by encouraging smart young EAs to join AI companies to try to improve their safety cultures from within. As far as I’ve seen, that strategy has been a huge failure for AI safety, and a huge win for immoral AI companies following a deeply cynical strategy of safety-washing their capabilities development. OpenAI, DeepMind, Anthropic, and xAI have all made noise about AI risks… and they’ve all hired EAs… and they’ve carried on, at top speed, racing towards AGI.
Perhaps there was some hope, 10 years ago, that installing a cadre of X-risk-savvy EAs in the heart of the AI industry might overcome its reckless incentives to pursue capabilities over safety. I see no such hope any more. Capabilities work has accelerated far faster than safety work.
If 80k Hours is going to take AI risks seriously, its leadership team needs to face the possibility that there is simply no safe way to develop AGI—at least not for the next few centuries, until we have a much clearer understanding of how to solve AI alignment, including the very thorny game-theoretic complications of coordinating between billions of people and potentially trillions of AGIs.
And, if there is no safe way to develop AGI, let’s stop pretending that there is one. Pretending is dangerous. Pretending gives misleading signals to young researchers, and regulators, and ordinary citizens.
If the only plausible way to survive the push towards AGI is to entirely shut down the push towards AGI, that’s what 80k Hours needs to advocate. Not nudging more young talent into serving as ethical window-dressing and safety-washers for OpenAI and Anthropic.
Hey Geoffrey,
Niel gave a response to a similar comment below—I’ll just add a few things from my POV:
I’d guess that pausing (incl. for a long time) or slowing down AGI development would be good for helping AGI go well if it could be done by everyone / enforced / etc., so figuring out how to do that would be in scope for this more narrow focus. So e.g. figuring out how an indefinite pause could work (maybe in a COVID-crisis-like world where the Overton window shifts?) seems helpful.
I (& others at 80k) am just a lot less pessimistic about the prospects for AGI going well / not causing an existential catastrophe. So we just disagree with the premise that “there is actually NO credible path for ‘helping the transition to AGI go well’”. In my case, maybe that’s because I don’t believe your (2) is necessary (though various other governance things probably are) & I think your (1) isn’t that unlikely to happen (though very far from guaranteed!).
I’m at the same time more pessimistic about everyone in the world stopping development of this hugely commercially exciting technology, so trying for that feels like a bad strategy to me.
As an AI safety person who believes short timelines are very possible, I’m extremely glad to see this shift.
For those who are disappointed, I think it’s worth mentioning that I just took a look at the Probably Good website and it seems much better than the last time I looked. I had previously been a bit reluctant to recommend it, but it now seems like a pretty good resource and I’m sure they’ll be able to make it even better with more support.
Given that The 80,000 Hours Podcast is increasing its focus on AI, it’s worth highlighting Asterisk Magazine as a good resource for exploring a broader set of EA-adjacent ideas.
This seems a reasonable update, and I appreciate the decisiveness, and clear communication. I’m excited to see what comes of it!
Thanks for the update!
Where does this overall leave you in terms of your public association with EA? Many orgs (including ones that are not just focused on AIS) are trying to dissociate themselves from the EA brand due to reputational reasons.
80k is arguably the one org that has the largest audience from the “outside world”, while also having close ties with the EA community. Are you guys going to keep the status quo?
I will add my two cents on this in this footnote[1] too, but I would be super curious to hear your thoughts!
I think in the short term, association with EA is not helpful for anyone that is trying to be taken seriously on the world stage, but dissociating also comes with downsides. We would probably want to see if the “short timelines AGI” bet pays off by 2030 or so. If it doesn’t, the costs will start to outweigh the short-term gains. (In the meantime, we should also invest more into EA PR.)
At the same time, by 2030 AIS might also grow enough to not be reliant on EA in terms of funding and talent.
The most recent write up on our thinking on this is here (in addition to the comments about EA values in the post above). Our current plan is to continue with this approach.
I applaud the decision to take a big swing, but I think the reasoning is unsound and probably leads to worse worlds.
I think there are actions that look like “making AI go well” that actually are worse than not doing anything at all, because things like “keep human in control over AI” can very easily lead to something like value lock-in, or at least leaving it in the hands of immoral stewards. It’s plausible that if ASI is developed and still controlled by humans, hundreds of trillions of animals would suffer, because humans still want to eat meat from an animal. I think it’s far from clear that factors like faster alternative proteins development outweigh/outpace this risk—it’s plausible humans will always want animal meat instead of identical cultured meat for similar reasons to why some prefer human-created art over AI-created art.
If society had positive valence, I think redirecting more resources to AI and minimising x-risk are worth it, the “neutral” outcome may be plausibly that things just scale up to galactic scales which seems ok/good, and “doom” is worse than that. However, I think that when farmed animals are considered, civilisation’s valence is probably significantly negative. If the “neutral” option of scale up occurs, astronomical suffering seems plausible. This seems worse than “doom”.
Meanwhile, in worlds where ASI isn’t achieved soon, or is achieved and doesn’t lead to explosive economic growth or other transformative outcomes, redirecting people towards focusing on that instead of other cause areas probably isn’t very good.
Promoting a wider portfolio of career paths/cause areas seems more sensible, and more beneficial to the world.
One reason we use phrases like “making AGI go well”, rather than some alternatives, is that 80k is concerned about risks like lock-in of really harmful values, in addition to human disempowerment and extinction risk — so I sympathise with your worries here.
Figuring out how to avoid these kinds of risks is really important, and recognising that they might arise soon is definitely within the scope of our new strategy. We have written about ways the future can look very bad even if humans have control of AI, for example here, here, and here.
I think it’s plausible to worry that not enough is being done about these kinds of concerns — that depends a lot on how plausible they are and how tractable the solutions are, which I don’t have very settled views on.
You might also think that there’s nothing tractable to do about these risks, so it’s better to focus on interventions that pay off in the short-term. But my view at least is that it is worth putting more effort into figuring out what the solutions here might be.
Thanks Cody. I appreciate the thoughtfulness of the replies given by you and others. I’m not sure if you were expecting the community response to be as it is.
My expressed thoughts were a bit muddled. I have a few reasons why I think 80k’s change is not good. I think it’s unclear how AI will develop further, and multiple worlds seem plausible. Some of my reasons apply to some worlds and not others. The inconsistent overlap is perhaps leading to a lack of clarity. Here’s a more general category of failure mode of what I was trying to point to.
I think in cases where AGI does lead to explosive outcomes soon, it’s suddenly very unclear what is best, or even good. It’s something like a wicked problem, with lots of unexpected second order effects and so on. I don’t think we have a good track record of thinking about this problem in a way that leads to solutions even on a first order effects level, as Geoffrey Miller highlighted earlier in the thread. In most of these worlds, what I expect will happen is something like:
Thinkers and leaders in the movement have genuinely interesting ideas and insights about what AGI could imply at an abstract or cosmic level.
Other leaders start working out what this actually implies individuals and organisations should do. This doesn’t work though, because we don’t know what we’re doing. Due to unknown unknowns, the most important things are missed, and because of the massive level of detail in reality, the things that are suggested are significantly wrong at load-bearing points. There are also suggestions in the spirit of “we’re not sure which of these directly opposing views X and Y are correct, and encourage careful consideration”, because it is genuinely hard.
People looking for career advice or organisational direction etc. try to think carefully about things, but in the end, most just use it to rationalise a messy choice they make between X and Y that they actually make based on factors like convenience, cost and reputational risk.
I think the impact of most actions here is basically chaotic. There are some things that are probably good, like trying to ensure it’s not controlled by a single individual. I also think “make the world better in meaningful ways in our usual cause areas before AGI is here” probably helps in many worlds, due to things like AI maybe trying to copy our values, or AI could be controlled by the UN or whatever and it’s good to get as much moral progress in there as possible beforehand, or just updates on the amount of morally aligned training data being used.
There are worlds where AGI doesn’t take off soon. I think that more serious consideration of the Existential Risk Persuasion Tournament leads one to conclude that wildly transformational outcomes just aren’t that likely in the short/medium term. I’m aware the XPT doesn’t ask about that specifically, but it seems like one of the better data points we have. I worry that focusing on things like expected value leads to some kind of Pascal’s mugging, which is a shame because the counterfactual—refusing to be mugged—is still good in this case.
I still think AI is an issue worth considering seriously, dedicating many resources to addressing, etc. I think significant de-emphasis on other cause areas is not good. Depending on how long 80k make the change for, it also plausibly leads to new people not entering other cause areas in significant numbers for quite some time, which is probably bad in movement-building ways that are greater than the sum of their parts (fewer people leads to feelings of defeat, stagnation, etc., and fewer new people means better, newer ideas can’t take over).
I hope 80k reverse this change after the first year or two. I hope that, if they don’t, it’s worth it.
Thanks for the additional context! I think I understand your views better now and I appreciate your feedback.
Just speaking for myself here, I think I can identify some key cruxes between us. I’ll take them one by one:
I disagree with this. I think it’s better if people have a better understanding of the key issues raised by the emergence of AGI. We don’t have all the answers, but we’ve thought about these issues a lot and have ideas about what kinds of problems are most pressing to address and what some potential solutions are. Communicating these ideas more broadly and to people who may be able to help is just better in expectation than failing to do so (all else equal), even though, as with any problem, you can’t be sure you’re making things better, and there’s some chance you make things worse.
I don’t think I agree with this. I think the value of doing work in areas like global health or helping animals is largely in the direct impact of these actions, rather than any impact on what it means for the arrival of AGI. I don’t think even if, in an overwhelming success, we cut malaria deaths in half next year, that will meaningfully increase the likelihood that AGI is aligned or that the training data reflects a better morality. It’s more likely that directly trying to work to create beneficial AI will have these effects. Of course, the case for saving lives from malaria is still strong, because people’s lives matter and are worth saving.
Recall that the XPT is from 2022, so a lot has happened since then. Even still, here’s what Ezra Karger noted about the expectations of the experts and forecasters when we interviewed him on the 80k podcast:
My understanding is that XPT was using the definition of AGI used in the Metaculus question cited in Niel’s original post (though see his comment for some caveats about the definition). In March 2022, that forecast was around 2056-2058; it’s now at 2030. The Metaculus question also has over 1500 forecasters, whereas XPT had around 30 superforecasters, I believe. So overall I wouldn’t consider XPT to be strong evidence against short timelines.
I think there is some general “outside view” reason to be sceptical of short timelines. But I think there are good reasons to think that kind of perspective would miss big changes like this, and there is enough reason to believe short timelines are plausible to take action on that basis.
Again, thanks for engaging with all this!
Thanks for the transparency! This is really helpful for coordination.
For anyone interested in what 80k is deprioritizing, this comment section might be a good space to pitch other EA career support ideas and offer support.
There might be space for an organization specifically focused on high school graduates, to help them decide whether, where, and what to study. This might be the most important decision in one’s life, especially for people like me who grew up in the countryside without really any intellectual role models and are open to moving abroad for their studies.
I’m unlikely to prioritize this any time soon, but if anyone else wants to set something up, I might be able to advise and maybe help with fundraising, support, etc. Just message or email me (contact on linktr.ee/manuelallgaier).
Perhaps this is a bit tangential, but I wanted to ask since the 80k team seem to be reading this post. How have 80k historically approached the mental health effects of exposing younger (i.e. likely to be a bit more neurotic) people to existential risks? I’m thinking in the vein of Here’s the exit. Do you/could you recommend alternate paths or career advice sites for people who might not be able to contribute to existential risk reduction due to, for lack of a better word, their temperament? (Perhaps a similar thing for factory farming, too?)
For example, I think I might make a decent enough AI Safety person and generally agree it could be a good idea, but I’ve explicitly chosen not to pursue it because (among other reasons) I’m pretty sure it would totally fry my nerves. The popularity of that LessWrong post suggests that I’m not alone, and also raises the interesting possibility that such people might end up actively detracting from the efforts of others, rather than just neutrally crashing out.
I don’t think we have anything written/official on this particular issue (though we have covered other mental health topics here). But this is one reason why we don’t think everyone should work on AIS/trying to help things go well with AGI: even though we want to encourage more people to consider it, we don’t blanket recommend it to everyone. We also wrote a little bit here about an issue that seems related—what to do if you find the case for an issue intellectually compelling but don’t feel motivated by it.
You’re shifting your resources, but should you change your branding?
Focusing on new articles and research about AGI is one thing, but choosing to brand yourselves as an AI-focused career organisation is another.
Personal story (causal thinking): I first discovered the EA principles while researching how to do good in my career; aside from 80k, all the well-ranked websites were not impact-focused. If the website had been specifically about AI or existential risk careers, I’m quite sure I would’ve skipped it and spent years not discovering EA principles. But by discovering those principles and diving deeper into the content, I eventually saw existential risk as a top priority. Last year, the biggest chunk of my donations went to AI. I also managed the translation of your guide into French, and now, through Mieux Donner (the French effective giving initiative I co-founded), we’ll likely raise donations to fund several AI positions.
Trade-off (statistical thinking): How many people might be deterred from engaging with EA and this AGI topic because of AI branding? How many people are not working in AI because your homepage, About Us and menu bar mention other cause areas? (Especially considering your next career guide will still be multi-cause, and the information shouldn’t have time to become outdated given the AGI timeline you mentioned.)
Your focus seems well thought out, but my guess regarding the branding is that you shouldn’t change it.
By shifting to a narrow AI focus, you risk reducing the source of effective do-gooders (including donors!) by 13.5%; that is one negative consequence. However, I can also think of other potential downsides:
Damage to EA’s reputation: This could feed into TESCREAL critics and prevent presenting EA principles to many potential supporters.
Potential lost opportunities: Even for your own organisation, focusing solely on AGI could cause you to lose out on backlinks, partnerships, and references that bring a steady flow of people.
As I read through your post, I’m still uncertain about what you plan for your branding. However, staying a nonprofit that helps people use their careers to solve the world’s most pressing problems, while focusing the majority of resources on AGI but maintaining the low-hanging fruit in other areas, seems to me to have a more positive impact than shifting your branding entirely.
So please, don’t mess up the communication—it could have a net-negative effect on all the cause areas.
Hi Romain,
Thanks for raising these points (and also for your translation!)
We are currently planning to retain our cause-neutral (& cause-opinionated), impactful careers branding, though we do want to update the site to communicate much more clearly and urgently our new focus on helping things go well with AGI, which will affect our brand.
How to navigate the kinds of tradeoffs you are pointing to is something we will be thinking about more as we propagate this shift in focus through to our most public-facing programmes. We don’t have answers just yet on what that will look like, but we do plan to take into account feedback from users on different framings to try to help things resonate as well as we can, e.g. via A/B tests and user interviews.
I would lean the other way, at least in some comms. You wouldn’t want people to think that (e.g.) “the career guidance space in high impact global health and wellbeing is being handled by 80k”. Changing branding could more clearly open opportunities for other orgs to enter spaces like that.
Here is a simple argument that this strategic shift is a bad one:
(1) There should be (at least) one EA org that gives career advice across cause areas.
(2) If there should be such an org, it should be (at least also) 80k.
(3) Thus, 80k should be an org that gives career advice across cause areas.
(Put differently, my reasoning is something like this: Should there be an org like the one 80k has been so far? Yes, definitely! But which one should it be? How about 80k!?)
I’m wondering with which premise 80k disagrees (and what you think about them!). They are indicating in this post that they think it would be valuable to have orgs that cover other individual cause areas such as biorisk. But I think there is a strong case for having an org that is not restricted to specific cause areas. After all, we don’t want to do the most good in cause area X but the most good, period.
At the same time, 80k seems like a great candidate for such a cause-neutral org. They have done great work so far (as far as I can tell), and they have built up valuable resources (experience, reputation, outputs, …) through this work that would help them do even better in the future.
But one could also reason:
(1) There should be (at least) one EA org focused on AI risk career advice; it is important that this org operate at a high level at the present time.
(2) If there should be such an org, it should be (or maybe can only be) 80K; it is more capable of meeting criterion (1) quickly than any other org that could try. It already has staff with significant experience in the area and the organizational competence to deliver career advising services with moderately high throughput.
(3) Thus, 80K should focus on AI risk career advice.
If one generally accepts both your original three points and these three, I think they are left with a tradeoff to make, focusing on questions like:
If both versions of statement (1) cannot be fulfilled in the next 1-3 years (i.e., until another org can sufficiently plug whichever hole 80K didn’t fill), which version is more important to fulfill during that time frame?
Given the capabilities and limitations of other orgs (both extant and potential future), would it be easier for another org to plug the AI-focused hole or the general hole?
Good reply! I thought of something similar as a possible objection to my premise (2) that 80k should fill the role of the cause-neutral org. Basically, there are opportunity costs to 80k filling this role, because it could also fill the role of (e.g.) an AI-focused org. The question is how high these opportunity costs are, and you point out two important factors. What I take to be important, and plausibly decisive, is that 80k is especially well suited to fill the role of the cause-neutral org (more so than the role of the AI-focused org) due to its history and the brand it has built. Combined with a ‘global’ perspective on EA according to which there should be one such org, it seems plausible to me that it should be 80k.
Yes, and 80k think that AI safety is the cause area that leads to the most good. 80k never covered all cause areas—they didn’t cover the opera or beach cleanup or college scholarships or 99% of all possible cause areas. They have always focused on what they thought were the most important cause areas, and they continue to do so. Cause neutrality doesn’t mean ‘supporting all possible causes’ (which would be absurd); it means ‘being willing to support any cause area, if the evidence suggests it is the best’.
Arden from 80k here—just flagging that most of 80k is currently asleep (it’s midnight in the UK), so we’ll be coming back to respond to comments tomorrow! I might start a few replies, but will be getting on a plane soon so will also be circling back.
Will this affect the 80k job board?
Will you continue to advertise jobs in all top cause areas equally, or will the bar for jobs not related to AI safety be higher now?
If the latter, is there space for an additional, cause-neutral job board that could feature all 80k-listed jobs and more from other cause areas?
Hey Manuel,
I would not describe the job board as currently advertising all cause areas equally, but yes, the bar for jobs not related to AI safety will be higher now. As I mention in my other comment, the job board is interpreting this changed strategic focus broadly to include biosecurity, nuclear security, and even meta-EA work—we think all of these have important roles to play in a world with a short timeline to AGI.
In terms of where we’ll be raising the bar, this will mostly affect global health, animal welfare, and climate postings — specifically in terms of the effort we put into finding roles in these areas. With global health and animal welfare, we’re lucky to have great evaluators like GiveWell and great programs like Charity Entrepreneurship to help us find promising orgs and teams. It’s easy for us to share these roles, and I remain excited to do so. However, part of our work involves sourcing for new roles and evaluating borderline roles. Much of this time will shift into more AIS-focused work.
Cause-neutral job board: It’s possible! I think that our change makes space for other boards to expand. I also think that this creates something of a trifecta, to put it very roughly: The 80k job board with our existential risk focus, Probably Good with a more global health focus, and Animal Advocacy Careers with an animal welfare focus. It’s possible that effort put into a cause-neutral board could be better put elsewhere, given that there’s already coverage split between these three.
Ok, so in the spirit of [about p(doom|AGI)], and [is lacking], I ask if you have seriously considered whether [safely navigating the transition to a world with AGI] is even possible? (Let alone at all likely from where we stand.)
You (we all) should be devoting a significant fraction of resources toward slowing down/pausing/stopping AGI (e.g. pushing for a well enforced global non-proliferation treaty on AGI/ASI), if we want there to be a future at all.
Hey Greg! I personally appreciate that you and others are thinking hard about the viability of giving us more time to solve the challenges that I expect we’ll encounter as we transition to a world with powerful AI systems. Due to capacity constraints, I won’t be able to discuss the pros and cons of pausing right now. But as a brief sketch of my current personal view: I agree it’d be really useful to have more time to solve the challenges associated with navigating the transition to a world with AGI, all else equal. However, I’m relatively more excited than you about other strategies to reduce the risks of AGI, because I’m worried about the tractability of a (really effective) pause. I’d also guess my P(doom) is lower than yours.
Hi Niel, what I’d like to see is an argument for the tractability of successfully “navigating the transition to a world with AGI” without a global catastrophe (or extinction) (i.e. an explanation for why your p(doom|AGI) is lower). I think this is much less tractable than getting a (really effective) Pause! (Even if a Pause itself is somewhat unlikely at this point.)
I think most people in EA have relatively low (but still macroscopic) p(doom)s (e.g. 1-20%), and have the view that “by default, everything turns out fine”. And I don’t think this has ever been sufficiently justified. The common view is that alignment will just somehow be solved enough to keep us alive, and maybe even thrive (if we just keep directing more talent and funding to research). But then the extrapolation to the ultimate implications of such imperfect alignment (e.g. gradual disempowerment → existential catastrophe) never happens.
Is there a possible world in which the non-AI work of 80k could be “divested” or “spun out”? I understand that this in and of itself could be a huge undertaking and may defeat the purpose of the re-alignment of values—but could the door remain open to this if someone/another org expressed interest?
I’m selfishly in favor of this change. My question is: will 80k rebrand itself, perhaps to “N k hours (where 1 < N < 50)”?
This is why organizations need diversity: this is exactly what happens when you have an organization with a Western, developed-world mindset. The palette of ideas and viewpoints is clearly lacking.
The promised AGI delivery by 2030 will fail; we barely have the resources to properly run current LLM models, and AGI will surely be much more complex, if it is possible at all. Congratulations, you have now jumped on a bandwagon that will take us nowhere, while forgetting about the actual causes that are relevant right now.
It’s a shame that the same people who built you up, shaped you as an organization, and gave you credibility are now left behind on the pier.