What 80k programmes will be delivering in the near term
In response to questions that we and CEA have received about how, and to what extent, our programme delivery will change as a result of our new strategic focus, we wanted to give a tentative indication of our programmes' plans over the coming months.
The following is our current guess of what we’re going to be doing in the short term. It’s quite zoomed in on the things that are or aren’t changing as a result of our strategic update, rather than going into detail on: a) what things we’ve decided not to prioritise, even though we think they’d be valuable for others to work on; b) things which aren’t affected by our strategy very much (such as our operations functions).
It's also written while 80k is still thinking through our plans, so we're not able to (and aren't trying to) give a firm commitment about what we'll definitely do or not do. Despite our uncertainty, we thought it'd be useful to share the tentative plans we have here, so that people considering what to work on, or whether to recommend 80k's resources, have an idea of what to expect from us.
~
To be clear, we think it’s an unspeakable travesty that we live in a world where there is so much preventable suffering and death going unaddressed. The following is a concise statement of our priorities, but should not be taken as an indication that we think it’s anything other than a tragedy that so much triage is needed.
We would love it if our programmes could continue to deliver resources covering a wider breadth of impactful cause areas, but unfortunately we think the situation with AI is severe and urgent enough that we need to prioritise using our capacity to help with it.
In writing this, we hope we can help others figure out where the gaps left by 80k are likely to be, so that they are easier to fill, and to understand how 80k might still be useful to them and their groups.
~
Web
User flow — Historically, and in our upcoming plans, our site user flow takes new users to the career guide — primarily a principles-first framing of impactful careers. We expect to keep this user flow for the immediate future, though we might:
Update the guide to bring AI safety up sooner / more prominently (though we overall expect it to remain a principles-first resource)
Introduce a second user flow, targeting users who reach 80k with an existing interest in helping AI go well.
Broad site framing — We’re currently planning a project to update our site so that our prioritisation of AI, and the urgency we think it deserves, is more front and centre. That said, we expect to maintain our overall “impactful careers” focus as the high-level framing people first encounter when they reach the site via our front page. We continue to view EA principles as important for pursuing high-impact careers, including in AI safety and policy, so we plan to continue to highlight them.
New publications — Going forward, we’re planning to increase the proportion of new content that focuses on AI-safety-relevant topics. To do this at the standard we’d like, we’ll need to stop writing new content on non-AI-safety topics.
As mentioned in our post, we think the topics that are relevant here are “relatively diverse and expansive, including intersections where AI increases risks in other cause areas, such as biosecurity”.
Existing content — As mentioned, we plan for our existing web content to remain accessible to users, though non-AI topics will not be featured or promoted as prominently in the future.
Podcast
We expect ~80% of our podcast episodes to be focused on AI. Over the last two years, ~40% of our main-feed content has been AI-focused.
As you might have seen, the podcast team is also hoping to hire another host and a chief of staff to scale up the team to allow them to more comprehensively cover AGI developments, risks, and governance.
Advising
Broadly speaking, who our advisors speak to isn’t going to change very much (though our bar might rise somewhat). For the last few years, we’ve already been accepting advisees on the basis of their interest in working on our top pressing problems (especially mitigating risks from AI, as described here), and we refer promising applicants who are interested in an area where we have less expertise to other services, resources, and connections.
Huon discussed this more here, in particular:
“We still plan to talk to people considering work on any of our top problems (which includes animal welfare), and I believe we still have a lot of useful advice on how to pursue careers in these areas.
However, we will be applying a higher bar to applicants that aren’t primarily interested in working on AI.”
Job board
Along with slightly raising our bar for jobs not related to AI safety, we’ll be moving to more automated curation of global health and development, climate change, and animal welfare roles, so that we can spend more of our human-curation time on AI and related areas. This means we’ll be relying more on external evaluators like GiveWell, so our coverage might be worse in areas where good evaluators don’t exist. Overall, we’ll continue to list roles in these areas, but likely fewer than before.
Headhunting
Our headhunting service has historically been AI-focused due to capacity constraints, and will continue to be.
Video
Our video programme is new, and we’re still in the process of establishing its strategy. In general, we do expect it to focus on topics relevant to making AGI go well.
Rob Wiblin interviewed nuclear-war planner turned whistleblower Daniel Ellsberg five years before Ellsberg's death. Here's a quote from the interview: