Anecdote: I’m one of those people—I’d say I’d barely heard of EA / basically didn’t know what it was before a friend who already knew of it suggested I come to an EA Global (I think at the time one got a free t-shirt for referring friends). We were both philosophy students & I studied ethics, so I think he thought I might be interested even though we’d never talked about EA.
Arden Koehler
Thanks as always for this valuable data!
Since 80k is a large and growing source of people hearing about and getting involved in EA, some people reading this might be worried that 80k will stop contributing to EA’s growth, given our new strategic focus on helping people work on safely navigating the transition to a world with AGI.
tl;dr: I don’t think it will stop, and it might continue at a similar level to before, though it’s possible it will be somewhat reduced.
More: I am not sure whether 80k’s contribution to building EA, in terms of the sheer number of people getting involved, is likely to go down due to this focus vs. what it would otherwise be if we simply continued to scale our programmes as they currently are without this change in direction.
My personal guess at this time is that it will reduce at least slightly.
Why would it?
We will be more focused on helping people work on helping AGI go well—that means, e.g., university groups might be hesitant to recommend us to members who are not interested in AIS as a cause area.
At a prosaic level, some projects that would have been particularly useful for building EA, rather than helping with AGI in a more targeted way, are going to be deprioritised—e.g. I personally dropped a project I’d begun of updating our “building EA” problem profile in order to focus more on AGI-targeted things.
Our framings will probably change. It’s possible that the framings we use more going forward will emphasise EA style thinking a little less than our current ones, though this is something we’re actively unsure of.
We might sometimes link off to the AI safety community in places where we might have linked off to EA before (though it is much less developed, so we’re not sure).
However, I do expect us to continue to significantly contribute to building EA – and we might even continue to do so at a similar level vs. before. This is for a few reasons:
We still think EA values are important, so still plan to talk about them a lot. E.g. we will talk about *why* we’re especially concerned about AGI using EA-style reasoning, emphasise the importance of impartiality and scope sensitivity, etc.
We don’t currently have any plans for reducing our links to the EA community – e.g. we don’t plan to stop linking to the EA Forum, or stop using our newsletter to notify people about EAGs.
We still plan to list meta-EA jobs on our job board, put advisees in touch with people from the EA community when it makes sense, and by default keep our library of content online.
We’re not sure whether, in terms of numbers, the changes we’re making will cause our audience to grow or shrink. On the one hand, it’s a narrower focus, so it will appeal less to people who aren’t interested in AI. On the other, we are hoping to appeal more to AI-interested people, as well as older people, who might not have been as interested in our previous framings.
This will probably lead directly and indirectly to a big chunk of our audience continuing to get involved in EA due to engaging with us. This is valuable according to our new focus, because we think that getting involved in EA is often useful for being able to contribute positively to things going well with AGI.
To be clear, we also think EA growing is valuable for other reasons (we still think other cause areas matter, of course!). But it’s actually never been an organisational target[1] of ours to build EA (or at least it hasn’t been since I joined the org 5 years ago); growing EA has always been something we bring about as a side effect of helping people pursue high impact careers (because, as above, we’ve long thought that getting involved in EA is one useful step for pursuing a high impact career!)
Note on all the above: the implications of our new strategic focus for our programmes are still being worked out, so it’s possible that some of this will change.
Also relevant: FAQ on the relationship between 80k & EA (from 2023 but I still agree with it)
[1] Except to the extent that helping people into careers building EA constitutes helping them pursue a high impact career - & it is one of many ways of doing that (along with all the other careers we recommend on the site, plus others). We do also sometimes use our impact on the growth of EA as one proxy for our total impact, because the data is available, and we think it’s often a useful step to having an impactful career, & it’s quite hard to gather data on people we’ve helped pursue high impact careers more directly.
Hey Geoffrey,
Niel gave a response to a similar comment below—I’ll just add a few things from my POV:
I’d guess that pausing (incl. for a long time) or slowing down AGI development would be good for helping AGI go well if it could be done by everyone / enforced / etc. – so figuring out how to do that would be in scope re this narrower focus. So e.g. figuring out how an indefinite pause could work (maybe in a COVID-crisis-like world where the Overton window shifts?) seems helpful.
I’m (& others at 80k are) just a lot less pessimistic about the prospects for AGI going well / not causing an existential catastrophe. So we just disagree with the premise that “there is actually NO credible path for ‘helping the transition to AGI go well’”. In my case maybe because I don’t believe your (2) is necessary (though various other governance things probably are) & I think your (1) isn’t that unlikely to happen (though it’s very far from guaranteed!)
At the same time, I’m more pessimistic about everyone in the world stopping development toward this hugely commercially exciting technology, so I feel like trying for that would be a bad strategy.
I don’t think we have anything written/official on this particular issue (though we have covered other mental health topics here). But it is one reason why we don’t think everyone should work on AIS / trying to help things go well with AGI: even though we want to encourage more people to consider it, we don’t blanket-recommend it to everyone. We wrote a little bit here about an issue that seems related—what to do if you find the case for an issue intellectually compelling but don’t feel motivated by it.
Hi Romain,
Thanks for raising these points (and also for your translation!)
We are currently planning to retain our cause-neutral (& cause-opinionated), impactful careers branding, though we do want to update the site to communicate much more clearly and urgently our new focus on helping things go well with AGI, which will affect our brand.
How to navigate the kinds of tradeoffs you are pointing to is something we will be thinking about more as we propagate this shift in focus through to our most public-facing programmes. We don’t have answers just yet on what that will look like, but we do plan to take into account feedback from users on different framings to try to help things resonate as well as we can, e.g. via A/B tests and user interviews.
Thanks for the feedback here. I mostly want to just echo Niel’s reply, which basically says what I would have wanted to. But I also want to add, for transparency/accountability’s sake, that I reviewed this post before we published it with the aim of helping it communicate the shift well. I focused mostly on helping it communicate clearly and succinctly, which I do think is really important, but I think your feedback makes sense, and I wish that I’d also done more to help it demonstrate the thought we’ve put into the tradeoffs involved and awareness of the costs. For what it’s worth, we don’t have dedicated comms staff at 80k—helping with comms is currently part of my role, which is to lead our web programme.
Adding a bit more to my other comment:
For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I’m not totally sure—EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).
From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.
I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn’t accept fanatical views to prioritise them (though it may require caring somewhat about potential future beings). (We have a bit on this here.)
Existential risk is not most self-identified EAs’ top cause, and about 30% of self-identified EAs say they would not have gotten involved if it did not focus on their top cause (EA survey). So it does seem like you miss an audience here.
I agree this means we will miss out on an audience we could have had if we fronted content on more causes. We hope to also appeal to new audiences with this shift, such as older people who are less naturally drawn to our previous messaging, e.g. people who are more motivated by urgency. However, it seems plausible this shrinks our audience. That seems worth it because we’ll be telling people honestly how urgent and pressing AI risks seem to us, and because it could still lead to more impact overall, since impact varies so much between careers, in part based on which causes people focus on.
Hi Håkon, Arden from 80k here.
Great questions.
On org structure:
One question for us is whether we want to create a separate website (“10,000 Hours?”), that we cross-promote from the 80k website, or to change the 80k website a bunch to front the new AI content. That’s something we’re still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we’re not currently thinking about making an entire new organisation.
Why not?
For one thing, it’d be a lot of work and time, and we feel this shift is urgent.
Primarily, though, 80,000 Hours is a cause-impartial organisation, and we think that means prioritising the issues we think are most pressing (& telling our audience why we think that).
What would be the reason for keeping one 80k site instead of making a 2nd separate one?
As I wrote to Zach above, I think the site currently doesn’t represent well the possibility of short timelines or the variety of risks AI poses, even though it claims to be telling people key information they need to know to have a high impact career. I think that is key information, so I want it to be included very prominently.
As a commenter noted below, it’d take time and work to build up an audience for the new site.
But I’m not sure! As you say, there are reasons to make a separate site as well.
On EA pathways: I think Chana covered this well – it’s possible this will shrink the number of people getting into EA ways of thinking, but it’s not obvious. AI risk doesn’t feel so abstract anymore.
On reputation: this is a worry. We do plan to express uncertainty about whether AGI will indeed progress as quickly as we worry it will, and to be clear that if people pursue a route to impact that depends on fast AI timelines, they’re making a bet that might not pay off. However, we think it’s important both for us & for our audience to act under uncertainty, using rules of thumb but also thinking about expected impact.
In other words – yes, our reputation might suffer from this if AI progresses slowly. If that happens, it will probably be worse for our impact, but better for the world, and I think I’ll still feel good about expressing our (uncertain) views on this matter when we had them.
Hey Zach. I’m about to get on a plane so won’t have time to write a full response, sorry! But wanted to say a few quick things before I do.
Agree that it’s not certain or obvious that AI risk is the most pressing issue (though it is 80k’s best guess & my personal best guess, and I don’t personally have the view that it requires fanaticism.) And I also hope the EA community continues to be a place where people work on a variety of issues—wherever they think they can have the biggest positive impact.
However, our top commitment at 80k is to do our best to help people find careers that will allow them to have as much positive impact as they can. & we think that to do that, more people should strongly consider and/or try out working on reducing the variety of risks that we think transformative AI poses. So we want to do much more to tell them that!
In particular, from a web-specific perspective, I think the website doesn’t currently reflect the possibility of short AI timelines, or the possibility that AI might pose not only risks from catastrophic misalignment but other risks too, plus the fact that it will probably affect many other cause areas. Given the size of our team, I think we need to focus our new content capacity on changing that.
I think this post I wrote a while ago might also be relevant here!
Will circle back more tomorrow / when I’m off the flight!
Arden from 80k here—just flagging that most of 80k is currently asleep (it’s midnight in the UK), so we’ll be coming back to respond to comments tomorrow! I might start a few replies, but will be getting on a plane soon so will also be circling back.
I agree with this − 80,000 Hours made this change about a year ago.
Should you work at a frontier AI company?
Carl Shulman questioned the tension between AI welfare & AI safety on the 80k podcast recently—I thought this was interesting! He basically argues AI takeover could be even worse for AI welfare. Quoting from the end of the section:
Rob Wiblin: Maybe a final question is it feels like we have to thread a needle between, on the one hand, AI takeover and domination of our trajectory against our consent — or indeed potentially against our existence — and this other reverse failure mode, where humans have all of the power and AI interests are simply ignored. Is there something interesting about the symmetry between these two plausible ways that we could fail to make the future go well? Or maybe are they just actually conceptually distinct?
Carl Shulman: I don’t know that that quite tracks. One reason being, say there’s an AI takeover, that AI will then be in the same position of being able to create AIs that are convenient to its purposes. So say that the way a rogue AI takeover happens is that you have AIs that develop a habit of keeping in mind reward or reinforcement or reproductive fitness, and then those habits allow them to perform very well in processes of training or selection. Those become the AIs that are developed, enhanced, deployed, then they take over, and now they’re interested in maintaining that favourable reward signal indefinitely.
Then the functional upshot is this is, say, selfishness attached to a particular computer register. And so all the rest of the history of civilisation is dedicated to the purpose of protecting the particular GPUs and server farms that are representing this reward or something of similar nature. And then in the course of that expanding civilisation, it will create whatever AI beings are convenient to that purpose.
So if it’s the case that, say, making AIs that suffer when they fail at their local tasks — so little mining bots in the asteroids that suffer when they miss a speck of dust — if that’s instrumentally convenient, then they may create that, just like humans created factory farming. And similarly, they may do terrible things to other civilisations that they eventually encounter deep in space and whatnot.
And you can talk about the narrowness of a ruling group and say, and how terrible would it be for a few humans, even 10 billion humans, to control the fates of a trillion trillion AIs? It’s a far greater ratio than any human dictator, Genghis Khan. But by the same token, if you have rogue AI, you’re going to have, again, that disproportion.
Thanks for this valuable reminder!
btw, the link on “more about legal risks” at the top goes to the wrong place.
Cool project—I tried to subscribe to the podcast to check it out. But I couldn’t find it on Pocket Casts, so I didn’t (didn’t seem worth me using a 2nd platform).
I wanted to subscribe because I’ve wanted an audio feed I can listen to while I commute that keeps me in touch with events outside the more specific areas of interest I hear about through niche channels, while not going quite as broad / un-curated as the BBC news (which I currently use for this) -- and this seemed like potentially a good middle ground.
Tiny other piece of feedback: the title feels aggressive to me vs. some nearby alternatives (e.g. just “relevance news” or something), since it nearly states that anything not included is not actually relevant at all, which is a fairly strong claim I could see people getting unhappy about.
The project aligns closely with the fund’s vision of a “principles-first EA” community, we’d be excited for the EA community’s outputs to look more like Richard’s.
Is this saying that the move to principles-first EA as a strategic perspective for EAF goes with a belief that more EA work should be “principles first” & not cause specific (so that more of the community’s outputs look like Richard’s)? I wouldn’t have necessarily inferred that just from the fact that you’re making this strategic shift (it could be more of a comparative advantage / focus thing), so I wanted to clarify.
Speaking in a personal capacity here --
We do try to be open to changing our minds so that we can be cause neutral in the relevant sense, and we do change our cause rankings periodically and spend time and resources thinking about them (in fact we’re in the middle of thinking through some changes now). But how well set up are we, institutionally, to be able to in practice make changes as big as deprioritising risks from AI if we get good reasons to? I think this is a good question, and want to think about it more. So thanks!
Just want to say here (since I work at 80k & commented about our impact metrics & other concerns below) that I think it’s totally reasonable to:
Disagree with 80,000 Hours’s views on AI safety being so high priority, in which case you’ll disagree with a big chunk of the organisation’s strategy.
Disagree with 80k’s views on working in AI companies (which, tl;dr, is that it’s complicated and depends on the role and your own situation, but is sometimes a good idea). I personally worry about this one a lot and think it really is possible we could be wrong here. It’s not obvious what the best thing to do is, and we discuss this a bunch internally. But we think there’s risk in any approach to this issue, and we’re going with our best guess based on talking to people in the field. (We reported on some of their views, some of which were basically ‘no, don’t do it!’, here.)
Think that people should prioritise personal fit more than 80k causes them to. To be clear, we think (& 80k’s content emphasises) that personal fit matters a lot. But it’s possible we don’t push this hard enough. Also, because we think it’s not the only thing that matters for impact (& so also talk a lot about cause and intervention choice), we tend to present this as a set of considerations to navigate that involves some trade-offs. So it’s reasonable to think that 80k encourages too much trading off of personal fit, at least for some people.
Hey, Arden from 80,000 Hours here –
I haven’t read the full report, but given the time sensitivity with commenting on forum posts, I wanted to quickly provide some information relevant to some of the 80k mentions in the qualitative comments, which were flagged to me.
Regarding whether we have public measures of our impact & what they show
It is indeed hard to measure how much our programmes counterfactually help move talent to high impact causes in a way that increases global welfare, but we do try to do this.
From the 2022 report, the relevant section is here. I’m copying it in as there are a bunch of links.
We primarily use six sources of data to assess our impact:
The 80,000 Hours user survey. A summary of the 2022 user survey is linked in the appendix.
Our in-depth case study analyses, which produce our top plan changes and DIPY estimates (last analysed in 2020).
Our own data about how users interact with our services (e.g. our historical metrics linked in the appendix).
Our and others’ impressions of the quality of our visible output.
Overall, we’d guess that 80,000 Hours continued to see diminishing returns to its impact per staff member per year. [But we continue to think it’s still cost-effective, even as it grows.]
Some elaboration:
DIPY estimates are our measure of counterfactual career plan shifts we think will be positive for the world. Unfortunately it’s hard to get an accurate read on counterfactuals and response rates, so these are only very rough estimates & we don’t put that much weight on them.
We report on things like engagement time & job board clicks as *lead metrics* because we think they tend to flow through to counterfactual high impact plan changes, & we’re able to measure them much more readily.
Headlines from some of the links above:
From our own survey (2138 respondents):
On the overall social impact that 80,000 Hours had on their career or career plans,
1021 (50%) said 80,000 Hours increased their impact
Within this, we identified 266 who reported a >30% chance of 80,000 Hours causing them to take a new job or graduate course (a “criteria-based plan change”)
26 (1%) said 80,000 Hours reduced their impact.
Themes in their answers were demoralisation and causing career choices that were a poor fit.
Open Philanthropy’s EA/LT survey asked their respondents “What was important in your journey towards longtermist priority work?” – it has a lot of different results and feels hard to summarise, but it showed a big chunk of respondents considered 80k a factor in ending up working where they are.
The 2020 EA survey link says “More than half (50.7%) of respondents cited 80,000 Hours as important for them getting involved in EA”. (2022 says something similar)
Regarding the extent to which we are cause neutral & whether we’ve been misleading about this
We do strive to be cause neutral, in the sense that we try to prioritize working on the issues where we think we can have the highest marginal impact (rather than committing to a particular cause for other reasons).
For the past several years we’ve thought that the most pressing problem is AI safety, so we have put much of our effort there. (Some 80k programmes focus on it more than others – I reckon for some it’s a majority of their effort – but it hasn’t been true that as an org we “almost exclusively focus on AI risk”; a bit more on that here.)
In other words, we’re cause neutral, but not cause *agnostic* – we have a view about what’s most pressing. (Of course we could be wrong or thinking about this badly, but I take that to be a different concern.) The most prominent place we describe our problem prioritization is our problem profiles page – which is one of our most popular pages. We describe our list of issues this way: “These areas are ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar (though there’s a lot of variation in the impact of work within each issue as well).” (Here’s also a past comment from me on a related issue.)
Regarding the concern about us harming talented EAs by causing them to choose bad early career jobs
To the extent that this has happened, it is quite serious – helping talented people have higher impact careers is our entire point! I think we will always sometimes fail to give good advice (given the diversity & complexity of people’s situations & the world), but we do try to aggressively minimise negative impacts, and if people think any particular part of our advice is unhelpful, we’d like them to contact us about it! (I’m arden@80000hours.org & I can pass them on to the relevant people.)
We do also try to find evidence of negative impact, e.g. using our user survey, and it seems dramatically less common than the positive impact (see the stats above), though there are of course selection effects with that kind of method so one can’t take that at face value!
Regarding our advice on working at AI companies and whether this increases AI risk
This is a good worry and we talk a lot about this internally! We wrote about this here.
Hey Matt,
I share several of the worries articulated in this post.
I think you’re wrong about how you characterise 80k’s strategic shift here, and want to try to correct the record on that point. I’m also going to give some concrete examples of things I’m currently doing, to illustrate a bit what I mean, & also include a few more personal comments.
(Context: I run the 80k web programme.)
Well put. And I agree that there are some concerning signs in this direction (though I’ve also had countervailing, inspiring experiences of AIS-focused people questioning whether some prevailing view about what to do in AIS is actually best for the world.)
I’d also love to see more cause prioritisation research. And it’s gonna be hard to both stay open enough to changing our minds about how to make the world better & pursue our chosen means with enough focus to be effective. I think this challenge is fairly central to EA.
On 80k’s strategic shift:
You wrote:
How do we see the relationship between focusing on helping AGI go well and doing the most good?
It has always been the case that people and organisations need to find some intermediary outcome that comes before the good to point at strategically, some proxy for impact. Strategy is always about figuring out what’s gonna be the biggest/most cost-effective causal factor for that (i.e. means), & therefore the best proxy to pursue.
We used to focus on career changes not necessarily inside one specific cause area but it was still a proxy for the good. Now our proxy for the good is helping people work on making AGI go well, but our relationship to the good is the same as it was before: trying our best to point at it, trying to figure out the best means for doing so.
EA values & ideas are still a really important part of the strategy.
We wrote this in our post on the shift:
Though one might understandably worry that that was just lip service, meant to reassure people. Let me talk about some recent internal goings-on off the top of my head, which hopefully do something to show we mean it:
1. Our internal doc on web programme strategy (i.e. the strategy for the programme I run) currently says that in order for our audience to actually have much more impact with their careers, engagement with the site ideally causes movement along at least 3[1] dimensions:
This makes re-designing the user flow post-strategic-shift a difficult balancing act, full of tradeoffs. How do we both quickly introduce people to AI being a big deal & urgent, and communicate EA ideas, plus help people shift their careers? Which do we do first?
We’re going to lose some simplicity (and some people, who don’t want to hear it) trying to do all this, and it will be reflected in the site being more complex than a strategy like “maximize for engagement or respectability” or “maximize for getting one idea across effectively” would recommend.
My view is that it’s worth it, because there is a danger of people just jumping into jobs that have “AI” or even “AI security/safety” in the name, without grappling with tough questions around what it actually means to help AGI go well or prioritising between options based on expected impact.
(On the term “EA mindset”—it’s really just a nickname; the thing I think we should care about is the focus on impact/use of the ideas.)
2. Our CEO (Niel Bowerman) recently spent several weeks with his top proactive priority being to help figure out the top priorities within making AGI go well – i.e. which is more pressing (in the sense of where additional talented people can do the most marginal good) between issues like AI-enabled human coups, getting things right with the rights and welfare of digital minds, and catastrophic misalignment. We argued about questions like “how big is the spread between issues within making AGI go well?” and “to what extent is AI rights and welfare an issue humanity has to get right before AI becomes incredibly powerful, due to potential lock-in effects of bad discourse or policies?”
So, we agree with this:
In other words, the analysis, as you say, is not done. It’s gonna be hecka hard to figure out “the particulars of what to do with AI.” And we do not “have it from here” – we need people thinking critically about this going forward so they stand the best chance of actually helping AGI go well, rather than just having a career in “something something AI.”
[1] (I’m currently debating whether we should add a 4th: tactical sophistication about AI.)