Hey, Arden from 80,000 Hours here –
I haven’t read the full report, but given the time sensitivity of commenting on forum posts, I wanted to quickly provide some information relevant to some of the 80k mentions in the qualitative comments, which were flagged to me.
Regarding whether we have public measures of our impact & what they show
It is indeed hard to measure how much our programmes counterfactually help move talent to high impact causes in a way that increases global welfare, but we do try to do this.
From the 2022 report, the relevant section is here. I’m copying it in, as there are a bunch of links.
We primarily use six sources of data to assess our impact:
The 80,000 Hours user survey. A summary of the 2022 user survey is linked in the appendix.
Our in-depth case study analyses, which produce our top plan changes and DIPY estimates (last analysed in 2020).
Open Philanthropy’s EA/LT survey (headlines below).
The EA survey (headlines below).
Our own data about how users interact with our services (e.g. our historical metrics linked in the appendix).
Our and others’ impressions of the quality of our visible output.
Overall, we’d guess that 80,000 Hours continued to see diminishing returns to its impact per staff member per year. [But we continue to think it’s still cost-effective, even as it grows.]
Some elaboration:
DIPY estimates (“discounted, impact-adjusted peak years”) are our measure of counterfactual career plan shifts we think will be positive for the world. Unfortunately it’s hard to get an accurate read on counterfactuals and response rates, so these are only very rough estimates & we don’t put that much weight on them.
We report on things like engagement time & job board clicks as *lead metrics* because we think they tend to flow through to counterfactual high impact plan changes, & we’re able to measure them much more readily.
Headlines from some of the links above:
From our own survey (2138 respondents):
When asked about the overall social impact 80,000 Hours had on their career or career plans:
1021 (50%) said 80,000 Hours increased their impact.
Within this, we identified 266 who reported a >30% chance of 80,000 Hours causing them to take a new job or graduate course (a “criteria-based plan change”).
26 (1%) said 80,000 Hours reduced their impact.
Themes in their answers were demoralisation and 80k causing career choices that were a poor fit.
Open Philanthropy’s EA/LT survey asked their respondents “What was important in your journey towards longtermist priority work?” – it has a lot of different results and feels hard to summarise, but it showed that a big chunk of people considered 80k a factor in ending up working where they are.
The 2020 EA survey link says “More than half (50.7%) of respondents cited 80,000 Hours as important for them getting involved in EA”. (The 2022 survey says something similar.)
Regarding the extent to which we are cause neutral & whether we’ve been misleading about this
We do strive to be cause neutral, in the sense that we try to prioritise working on the issues where we think we can have the highest marginal impact (rather than committing to a particular cause for other reasons).
For the past several years we’ve thought that the most pressing problem is AI safety, so we have put much of our effort there. (Some 80k programmes focus on it more than others – I reckon for some it’s a majority of their effort – but it hasn’t been true that as an org we “almost exclusively focus on AI risk”. A bit more on that here.)
In other words, we’re cause neutral, but not cause *agnostic* – we have a view about what’s most pressing. (Of course we could be wrong or thinking about this badly, but I take that to be a different concern.)
The most prominent place we describe our problem prioritisation is our problem profiles page – which is one of our most popular pages. We describe our list of issues this way: “These areas are ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar (though there’s a lot of variation in the impact of work within each issue as well).” (Here’s also a past comment from me on a related issue.)
Regarding the concern about us harming talented EAs by causing them to choose bad early career jobs
To the extent that this has happened, it’s quite serious – helping talented people have higher-impact careers is our entire point! I think we will always sometimes fail to give good advice (given the diversity & complexity of people’s situations & the world), but we do try to aggressively minimise negative impacts, and if people think any particular part of our advice is unhelpful, we’d like them to contact us about it! (I’m arden@80000hours.org & can pass messages on to the relevant people.)
We do also try to find evidence of negative impact, e.g. using our user survey, and it seems dramatically less common than the positive impact (see the stats above) – though there are of course selection effects with that kind of method, so one can’t take those numbers at face value!
Regarding our advice on working at AI companies and whether this increases AI risk
This is a good worry, and one we talk about a lot internally! We wrote about it here.
Just want to say here (since I work at 80k & commented about our impact metrics & other concerns below) that I think it’s totally reasonable to:
Disagree with 80,000 Hours’ views on AI safety being such a high priority, in which case you’ll disagree with a big chunk of the organisation’s strategy.
Disagree with 80k’s views on working at AI companies (which, tl;dr, are that it’s complicated and depends on the role and your own situation, but that it’s sometimes a good idea). I personally worry about this one a lot and think it really is possible we’re wrong here. It’s not obvious what the best thing to do is, and we discuss it a bunch internally. But we think there’s risk in any approach to the issue, and we’re going with our best guess based on talking to people in the field. (We reported on some of their views, some of which were basically ‘no, don’t do it!’, here.)
Think that people should prioritise personal fit more than 80k causes them to. To be clear, we think (& 80k’s content emphasises) that personal fit matters a lot. But it’s possible we don’t push this hard enough. Also, because we think personal fit isn’t the only thing that matters for impact (& so we also talk a lot about cause and intervention choice), we tend to present it as one of a set of considerations to navigate, which involves some trade-offs. So it’s reasonable to think that 80k encourages too much trading off of personal fit, at least for some people.