Hey Nick, just wanted to say thanks for this suggestion. We were trying to keep the post succinct, but in retrospect I would have liked to include more of the mood of Conor’s comment here without losing the urgency of the original post. I too hate that this is the timeline we’re in.
I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause. As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.
A couple of comments on other parts of your post in case it’s helpful:
> I also struggle to understand how this is the best strategy as an onramp for people to EA—assuming that is still part of the purpose of 80k. Yes there are other orgs which do career advising and direction, but they are still minnows compared with you. Even if your sole goal is to get as many people into AI work as possible, I think you could well achieve that better through helping people understand worldview diversification and helping them make up their own mind, while keeping of course a heavy focus on AI safety and clearly having that as your no. 1 cause.
Our purpose is not to get people into EA, but to help solve the world’s most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k’s relationship to the EA community here.) But I also think the world has changed a lot and will change even more in the near future, and it would be surprising if 80k’s best path to impact didn’t change as well. I think focusing our ongoing efforts more on making the development of AGI go well is our best path to impact, building on what 80k has created over time.
But I might be wrong about this, and I think it’s reasonable that others disagree.
I don’t expect the whole EA community to take the same approach. CEA has said it wants to take a “principles-first approach”, rather than focusing more on AI as we will (though to be clear, our focus is driven by our principles, and we still want to communicate that clearly).
I think open communication about what different orgs are prioritising and why is really vital for coordination and to avoid single-player thinking. My hope is that people in the EA community can do this without making others with different cause prio feel bad about their disagreements or differences in strategy. I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.
> boy is that some bet to make.
Unfortunately I think that all the options in this space involve taking bets in an important way. We also think that it’s costly if users come to our site and don’t quickly understand that we think the current AI situation deserves societal urgency.
On the other costs that you mention in your post, I think I see them as less stark than you do. Quoting Cody’s response to Rocky above:
> We still plan to have our career guide up as a key piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t think I see this as clear a break from the past as you might.

I also want to thank you for sharing your concerns, which I realise can be hard to do. But it’s really helpful for us to know how people are honestly reacting to what we do.
Thanks David. I agree that the Metaculus question is a mediocre proxy for AGI, for the reasons you say. We included it primarily because it shows the magnitude of the AI timelines update that we and others have made over the past few years.
In case it’s helpful context, here are two footnotes that I included in the strategy document that this post is based on, but that we cut for brevity in this EA Forum version:
We define AGI using the Morris et al./DeepMind (2024) definition (see Table 1) of “competent AGI” for the purposes of this document: an AI system that performs as well as at least 50% of skilled adults at a wide range of non-physical tasks, including metacognitive tasks like learning new skills.
This DeepMind definition of AGI is the one that we primarily use internally. I think that we may get strategically significant AI capabilities before this though, for example via automated AI R&D.
On the Metaculus definition, I included this footnote:
The headline Metaculus forecast on AGI doesn’t fully line up with the Morris et al. (2024) definition of AGI that we use in footnote 2. For example, the Metaculus definition includes robotic capabilities, and doesn’t include being able to successfully carry out long-term planning and execution loops. But nonetheless I think this is the closest proxy for an AGI timeline that I’ve found on a public prediction market.
Hey Greg! I personally appreciate that you and others are thinking hard about the viability of giving us more time to solve the challenges that I expect we’ll encounter as we transition to a world with powerful AI systems. Due to capacity constraints, I won’t be able to discuss the pros and cons of pausing right now. But as a brief sketch of my current personal view: I agree it’d be really useful to have more time to solve the challenges associated with navigating the transition to a world with AGI, all else equal. However, I’m relatively more excited than you about other strategies to reduce the risks of AGI, because I’m worried about the tractability of a (really effective) pause. I’d also guess my P(doom) is lower than yours.
80,000 Hours is shifting its strategic approach to focus more on AGI
Hey John, unfortunately a lot of the data we use to assess our impact contains people’s personal details or comes from others’ analyses that we’re not able to share. As such, it is hard for me to give a sense of how many times more cost-effective we think our marginal spending is compared with the community funding bar.
But the original post includes various details about assessments of our impact, including the plan changes we’ve tracked, placements made, the EA survey, and the Open Philanthropy survey. We will be working on our annual review in spring 2024 and may have more details to share about the impact of our programmes then.
If you are interested in reading about our perspective on our historical cost-effectiveness from our 2019 annual review, you can do so here.
Thanks for the question. To be clear, we do think growing the team will significantly increase our impact in expectation.
We do see diminishing returns on several areas of investment, but having diminishing returns is consistent with significantly increasing impact.
Not all of our impact is captured in these metrics. For example, if we were to hire to increase the quality of our written advice even while maintaining the same number of website engagement hours, we’d expect our impact to increase (though this is of course hard to measure).
In our view, investments in 80k’s growth are still well above the cost-effectiveness bar for similar types of organisations and interventions in the problem areas we work on.
> a new career service org that caters to the other cause priorities of EA?
I’m guessing you are familiar with Probably Good? They are doing almost exactly the thing that you describe here. They are also accepting donations, and if you want to support them you can do so here.
Thanks for engaging with this post! A few thoughts prompted by your comment in case they are helpful:
80k has been interested in longtermism-related causes for many years, including many years in which we’ve seen a lot of growth. We were interested in longtermism for several years before we received our first grant from Open Philanthropy.
We believe there’s still a lot of need for talent in the problem areas that we focus on, so we don’t think there’s a strong reason for us to shift our focus on that front — at least for the time being.
In evaluating our impact, you should consider whether the causes we focus on seem most pressing to you. If you think our focus areas are not that pressing, we think it’s reasonable to be less interested in donating to us.
We’re happy to see others offering alternatives to our career advice — this kind of competition is healthy and we are keen to encourage it in the ecosystem.
All that said, we do have a lot of advice to people who are not that interested in longtermism. For example, our job board features opportunities for people working on global health and animal issues, and our career guide offers advice that is widely applicable, including about how readers could approach thinking through the question of which problems are most pressing for themselves.
Hey George — thanks for the question!
We haven’t done a full annual review of 2023 and the complete data isn’t in yet, so we haven’t yet thoroughly assessed this. The answer probably differs quite a bit from programme to programme. But here are a few thoughts that seemed relevant to me:
On web:
Over the past couple of years, the biggest predictor of change in web engagement time appears to be changes in our marketing spending. In 2022 we substantially increased our marketing spend. In 2023 our marketing spend was not dramatically larger than in 2022. This is reflected in the web engagement time metrics. (We are actively investigating the cost-effectiveness of marginal marketing spending, and are not fundraising for marketing as part of this public fundraising round as it is already being covered by Open Philanthropy.)
We have also put more effort into driving off-site engagement time in 2023, e.g. via our AI video, improvements to our newsletter, etc. This is not included in the engagement time metrics in the chart, but we estimate that in 2023 we grew off-site engagement time notably more than we did on-site engagement time.
On podcast:
The drivers of engagement with the podcast are more mysterious to me, and I have trouble making accurate predictions of future engagement time with the podcast. Viewed on a quarterly basis, growth in the podcast appears to be healthy.
On advising:
In 2023 we focused more on growing and systematising headhunting, active outreach and systems, and relatively less on increasing call numbers.
We didn’t make as many calls as we had hoped to, due in part to a manager on the team leaving.
We also put relatively more focus on improving call quality, for example by putting in place feedback systems. This was a focus because we grew the team in 2021 and 2022 and wanted more systems to keep everyone in sync and ensure continued quality.
On job board:
We’ve actually reduced our FTE input into the job board in 2023, but we are still seeing solid quarter-on-quarter growth.
Additional points:
Some of our staff growth came from hires to our internal systems team, which should strengthen our capacity over time but won’t result in direct improvements on these metrics.
We do expect some diminishing returns to staff growth over time. I’ll address this in another comment on this thread.
Yeah, Rethink Priorities, and yeah he was just wrong, which confused me. To be clear, I don’t think this was his fault: I asked the question in a kind of leading way, and he responded very quickly, so I model this more as an unfortunate miscommunication.
Confirming that I was wrong about this in my communication with Oli. Also agreeing with Oli here on the context in which those comments were made.
I have made a note in my reflective journal entry on this event to be more careful with my comms in circumstances such as this one.
My understanding is that this refers to the combined engagement time reported across Spotify, Apple and Google.
80,000 Hours spin out announcement and fundraising
If summaries are editable, it could be nice to keep the same length limit so that they don’t balloon during editing.
Here to help! 😛
I’m guessing a secondment is not a common term in the US?
80,000 Hours wants to see more people trying out recruiting
What is their level of familiarity with machine learning and/or computer science?
Thanks for doing this—I found it helpful!
Am I correct in thinking that under ‘Among all respondents’ under ‘Average usefulness ratings:’ the category
> 80k: 2.6 +/- 0.1
is just the 80k podcast and not all of 80k? If so, one could change it to:
> 80k podcast: 2.6 +/- 0.1
I haven’t read it, but Zershaaneh Qureshi at Convergence Analysis wrote a recent report on pathways to short timelines.