Meta note: I wouldn’t normally write a comment like this. I don’t seriously consider 99.99% of charities when making my donations; why single out one? I’m writing anyway because comments so far are not engaging with my perspective, and I hope more detail can help 80,000 Hours themselves and others engage better if they wish to do so. As I note at the end, they may quite reasonably not wish to do so.
For background, I was one of the people interviewed for this report, and in 2014-2018 my wife and I were one of 80,000 Hours’ largest donors. In recent years it has not made my shortlist of donation options. The report’s characterisation of them (spending a huge amount while not clearly being >0 on the margin) is fairly close to my own view, though clearly I was not the only person to express it. All views expressed below are my own.
I think it is very clear that 80,000 Hours have had a tremendous influence on the EA community. I cannot recall anyone stating otherwise, so references to things like the EA survey are not very relevant. But influence is not impact. I commonly hear two views on why this influence may not translate into positive impact:
- 80,000 Hours prioritises AI well above other cause areas. As a result, they commonly push people off paths which are high-impact per other worldviews. So if you disagree with them about AI, you’re going to read things like their case studies and be pretty nonplussed. You’re also likely to have friends who have left very promising career paths because they were told they would do even more good in AI safety. This is my own position.
- 80,000 Hours is likely more responsible than any other single org for the many EA-influenced people working on AI capabilities. Many of the people who consider AI the top priority are negative on this, and thus on the org as a whole. This is not my own position, but I mention it because I think it helps explain why (some) people who are very pro-AI may decline to fund.
I suspect this unusual convergence may be why they got singled out; pretty much every meta org has funders skeptical of it for cause prioritisation reasons, but here there are many skeptics even within the crowd that is broadly aligned with them on prioritisation.
Looping back to my own position, I would offer two ‘fake’ illustrative anecdotes:
Alice read Doing Good Better and was convinced of the merits of donating a moderate fraction of her income to effective charities. Later, she came across 80,000 Hours and was convinced by their argument that her career was far more important. However, she found herself unable to take any of the recommended positions. As a result she neither donates nor works in what they would consider a high-impact role; it’s as if neither interaction had ever occurred, except perhaps she feels a bit down about her apparent uselessness.
Bob was having an impact in a cause many EAs consider a top priority. But he is epistemically modest, and inclined to defer to the apparent EA consensus, communicated via 80,000 Hours, that AI was more important. He switched careers and did find a role with solid (but worse) personal fit. The role is well-paid and engaging day-to-day; Bob sees little reason to reconsider the trade-off, especially since ChatGPT seems to have vindicated 80,000 Hours’ prior belief that AI was going to be a big deal. But if pressed he would readily acknowledge that it’s not clear how his work actually improves things. In line with his broad policy on epistemics, he points out that EA leadership is very positive on his approach; who is he to disagree?
Alice and Bob have always been possible problems from my perspective. But in recent years I’ve met far more of them than I did when I was funding 80,000 Hours. My circles could certainly be skewed here, but when there’s a lack of good data my approach is to base my own decisions on my own observations. If my circles are skewed, other people who are seeing very little of Alice and Bob can always choose to fund.
On that last note, I want to reiterate that I cannot think of a single org, meta or otherwise, that does not have its detractors. I suspect there may be some latent belief that an org as central as 80,000 Hours has solid support across most EA funders. To the best of my knowledge this is not and has never been the case, for them or for anyone else. I do not think they should aim for that outcome, and I would encourage readers to update ~0 on learning this.
Just want to say here (since I work at 80k and commented about our impact metrics and other concerns below) that I think it’s totally reasonable to:
Disagree with 80,000 Hours’s views on AI safety being so high priority, in which case you’ll disagree with a big chunk of the organisation’s strategy.
Disagree with 80k’s views on working in AI companies (which, tl;dr, is that it’s complicated and depends on the role and your own situation, but is sometimes a good idea). I personally worry about this one a lot and think it really is possible we could be wrong here. It’s not obvious what the best thing to do here is, and we discuss this a bunch internally. But we think there’s risk in any approach to the issue, and are going with our best guess based on talking to people in the field. (We reported on some of their views, some of which were basically ‘no, don’t do it!’, here.)
Think that people should prioritise personal fit more than 80k causes them to. To be clear, we think (and 80k’s content emphasises) that personal fit matters a lot. But it’s possible we don’t push this hard enough. Also, because we think it’s not the only thing that matters for impact (and so we also talk a lot about cause and intervention choice), we tend to present this as a set of considerations to navigate that involves some trade-offs. So it’s reasonable to think that 80k encourages too much trading off of personal fit, at least for some people.
Thanks Arden. I suspect you don’t disagree with the people interviewed for this report all that much then, though ultimately I can only speak for myself.
One possible disagreement that you and other commenters brought up, which I meant to respond to in my first comment but forgot: I would not describe 80,000 Hours as cause-neutral, as you try to do here and here. This seems to be an empirical disagreement; quoting from the second link:
We are cause neutral[1] – we prioritise x-risk reduction because we think it’s most pressing, but it’s possible we could learn more that would make us change our priorities.
I don’t think that’s how it would go. If an individual 80,000 Hours staff member learned things that caused them to downshift their x-risk or AI safety priority, I would expect them to leave the org, not for the org to change. Similar observations apply to hiring. So while all the individuals involved may be cause neutral and open to change in the sense you describe, 80,000 Hours itself is not, practically speaking. It’s very common for orgs to be more ‘sticky’ than their constituent employees in this way.
I appreciate it’s a weekend, and you should feel free to take your time to respond to this if indeed you respond at all. Sorry for missing it in the first round.
Speaking in a personal capacity here:
We do try to be open to changing our minds so that we can be cause neutral in the relevant sense, and we do change our cause rankings periodically and spend time and resources thinking about them (in fact we’re in the middle of thinking through some changes now). But how well set up are we, institutionally, to be able in practice to make changes as big as deprioritising risks from AI if we get good reasons to? I think this is a good question, and want to think about it more. So thanks!
I think it is very clear that 80,000 Hours have had a tremendous influence on the EA community… so references to things like the EA survey are not very relevant. But influence is not impact… 80,000 Hours prioritises AI well above other cause areas. As a result, they commonly push people off paths which are high-impact per other worldviews.
Many of the things the EA Survey shows 80,000 Hours doing (e.g. introducing people to EA in the first place, helping people get more involved with EA, making people more likely to remain engaged with EA, introducing people to ideas and contacts that they think are important for their impact, and helping people (by their own lights) have more impact) are things which supporters of a wide variety of worldviews and cause areas could view as valuable. Our data suggests that it is not only people who prioritise longtermist causes who report these benefits from 80,000 Hours.
Hi David,
There is a significant overlap between EA and AI safety, and it is often unclear whether people supposedly working on AI safety are increasing or decreasing AI risk. So I think it would be helpful if you could point to some (recent) data on how many people are being introduced to global health and development, and animal welfare, via 80,000 Hours.
Thanks Vasco.
We actually have a post on this forthcoming, but I can give you the figures for 80,000 Hours specifically now.
Global Poverty:
80,000 Hours:
37.3% rated Global Poverty 4⁄5
28.6% rated Global Poverty 5⁄5
All respondents:
39.3% rated Global Poverty 4⁄5
28.7% rated Global Poverty 5⁄5
So the difference is minimal, but this also neglects the fact that the scale of 80K’s recruitment swamps any differences in the percentage supporting different causes. Since 80K recruits so many people, it is still the second-highest source of people who rate Global Poverty most highly (12.5% of such people), behind only personal contacts.
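To make the swamping point concrete, here is a minimal sketch of the arithmetic with hypothetical numbers (not EA Survey figures): a source with a somewhat lower rate of people rating a cause 5⁄5 can still contribute the most such people in absolute terms if it recruits far more people overall.

```python
# Hypothetical illustration only -- these are NOT EA Survey figures.
# The point: absolute count = recruits * rate, so a large recruiter with a
# slightly lower rate can still supply more people than a small recruiter
# with a higher rate.
sources = {
    "80,000 Hours (hypothetical)":  {"recruits": 1000, "rate_5_of_5": 0.29},
    "Small meta org (hypothetical)": {"recruits": 150,  "rate_5_of_5": 0.40},
}

for name, s in sources.items():
    count = s["recruits"] * s["rate_5_of_5"]
    print(f"{name}: {count:.0f} people rating Global Poverty 5/5")
# The larger recruiter yields 290 people vs 60, despite its lower rate.
```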
Animal Welfare:
80,000 Hours:
26.0% rated Animal Welfare 4⁄5
9.6% rated Animal Welfare 5⁄5
All respondents:
30.1% rated Animal Welfare 4⁄5
11.8% rated Animal Welfare 5⁄5
Here the difference is slightly bigger, though 80,000 Hours remains among the top sources of recruits rating Animal Welfare highest (specifically, after personal contact, the top sources are ‘Other’ (11.2%), ‘Book, article or blog’ (10.7%), and 80,000 Hours (9%)).
Thanks, David! Strongly upvoted.
To clarify, are those numbers relative to the people who got to know about EA in 2023 (via 80,000 Hours or any source)?
Thanks Vasco!
These are numbers from the most recent full EA Survey (end of 2022), but they’re not limited only to people who joined EA in the most recent year. Slicing it by individual cohorts would reduce the sample size a lot.
My guess is that it would also increase the support for neartermist causes among all recruits (respondents tend to start out neartermist and become more longtermist over time).
That said, if we do look at people who joined EA in 2021 or later (the last 3 years seems decently recent to me, and I don’t have the sense that 80K’s recruitment has changed so much in that time frame; n=1059), we see:
Global Poverty:
80,000 Hours:
35.8% rated Global Poverty 4⁄5
34.2% rated Global Poverty 5⁄5
All respondents:
40.2% rated Global Poverty 4⁄5
33.2% rated Global Poverty 5⁄5
Animal Welfare:
80,000 Hours:
26.8% rated Animal Welfare 4⁄5
8.9% rated Animal Welfare 5⁄5
All respondents:
29.7% rated Animal Welfare 4⁄5
13.7% rated Animal Welfare 5⁄5
Bob sees little reason to reconsider the trade-off, especially since ChatGPT seems to have vindicated 80,000 Hours’ prior belief that AI was going to be a big deal
ChatGPT is just the tip of the iceberg here.
GPT-4 is significantly more powerful than GPT-3.5. Google now has a multimodal model that can take in sound, images, and video, with a context window of up to a million tokens. Sora can generate amazingly realistic videos. And everyone is waiting to see what GPT-5 can do.
Further, the Center for AI Safety open letter has demonstrated that it isn’t just our little community that is worried about these things, but a large number of AI experts.
Their ‘AI is going to be a big thing’ bet seems to have been a wise call, at least at the current point in time. Of course, I’m doing AI Safety movement building, so I’m a bit biased here, and maybe we’ll think differently down the line, but right now they’re clearly ahead.