I’m a little sad and confused about this.
First, I think it’s a bit insensitive that a huge, leading org like this would write such a significant post with almost no recognition that this decision is likely to hurt and alienate some people. It’s unfortunate that the post is written in a warm and upbeat tone yet largely lacks emotional intelligence and recognition of the potential harms of this decision. I’m sure this is unintentional, but it still feels tone deaf. Why not acknowledge the potential emotional and community significance of this decision, and be a bit more humble in general? Something like...
“We realise this decision could be seen as sidelining the importance of many people’s work and could hurt or confuse some people. We encourage you to keep working on what you believe is most important, and we realise that even after much painstaking thought we’re still quite likely to be wrong here.”
I also struggle to understand how this is the best strategy as an onramp for people to EA, assuming that is still part of 80k’s purpose. Yes, there are other orgs that do career advising and direction, but they are still minnows compared with you. Even if your sole goal is to get as many people into AI work as possible, I think you could well achieve that better by helping people understand worldview diversification and make up their own minds, while of course keeping a heavy focus on AI safety and clearly having it as your number one cause.
It could also feel like a kick in the teeth to the huge number of people who are committed to EA principles, are working in animal welfare and global health, and are skeptical about the value of AI safety work for a range of reasons, whether it’s EA’s sketchy record to date, tractability, or just very different AGI timelines. Again, a bit more humility might have softened the blow here.
Why not just keep AI safety as your main cause area while still retaining at least some diversification? I get that you’re making a bet, but I think it’s an unnecessary one, both for the togetherness and growth of the EA community in general, and possibly even if your sole metric is attracting more good people to work on making the AI trajectory better.
You also put many of us in the potentially awkward position of disagreeing with the position of one of the top three or so EA orgs, a position I haven’t been in before. If anyone had asked me a week ago what I thought of 80,000 Hours, I would have said something like, “They’re a great organization that helps you think about how to do the most good possible with your life. Personally I think they have a bit too much focus on AI risk, but they’re an incredible resource for anyone thinking about what to do with their future, so check them out.”
Now I’m not sure what I’ll say, but it’s hard not to be honest, say I disagree with 80k’s sole focus on AI, and point people somewhere else, which doesn’t feel great for the “big EA tent” or for bolstering “EA as an idea”.
Despite all this, yes, you might be right that sidelining many people and their work, and risking splintering the community on some level, might be worth it for the good of AI safety. But boy is that some bet to make.
I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause. As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.
A couple of comments on other parts of your post, in case they’re helpful:
Our purpose is not to get people into EA, but to help solve the world’s most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k’s relationship to the EA community here.) But I also think the world has changed a lot and will change even more in the near future, and it would be surprising if 80k’s best path to impact didn’t change as well. I think focusing our ongoing efforts more on making the development of AGI go well is our best path to impact, building on what 80k has created over time.
But I might be wrong about this, and I think it’s reasonable that others disagree.
I don’t expect the whole EA community to take the same approach. CEA has said it wants to take a “principles-first approach”, rather than focusing more on AI as we will (though to be clear, our focus is driven by our principles, and we still want to communicate that clearly).
I think open communication about what different orgs are prioritising and why is really vital for coordination and to avoid single-player thinking. My hope is that people in the EA community can do this without making others with different cause prio feel bad about their disagreements or differences in strategy. I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.
Unfortunately I think that all the options in this space involve taking bets in an important way. We also think that it’s costly if users come to our site and don’t quickly understand that we think the current AI situation deserves societal urgency.
On the other costs that you mention in your post, I think I see them as less stark than you do. Quoting Cody’s response to Rocky above:
> We still plan to have our career guide up as a key piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t think I see this as clearly a break from the past as you might.
I also want to thank you for sharing your concerns, which I realise can be hard to do. But it’s really helpful for us to know how people are honestly reacting to what we do.
Thanks for the thoughtful reply; I really appreciate it. Having the CEO of an org replying to comments is refreshing, and I actually think it’s an excellent use of a few hours of time.
“I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.” This is fantastic to hear and makes a big difference; thanks for this.
“Our purpose is not to get people into EA, but to help solve the world’s most pressing problems.” This might be your purpose, but the reality is that 80,000 Hours plays an enormous role in getting people into EA.
Losing some (or a lot) of this impact could have been recognised as a potentially large (perhaps the largest) tradeoff of the new direction. What probably hit me most about the announcement was the seeming lack of recognition of what are potentially the most important tradeoffs; it makes it seem like the tradeoffs haven’t been considered, when I’m sure they have.
You’re right that we make bets whatever we do or don’t do.
Thanks again for the reply!
Sorry to hear you found this saddening and confusing :/
Just to share another perspective: To me, the post did not come across as insensitive. I found the tone clear and sober, as I’m used to from 80k content, and I appreciated the explicit mention that there might now be space for another org to cover other cause areas like bio or nuclear.
These trade-offs are always difficult, but like any EA org, 80k should do what they consider highest expected impact overall rather than what’s best for the EA community, and I’m glad they’re doing that.
What confused/saddened me wasn’t so much their reasons for the change, but why they didn’t address perhaps the 3-5 biggest potential objections, downsides, and trade-offs of the decision. They even had a section, “What does this mean for non-AI cause areas?”, without stating the most important things that this means for non-AI cause areas, which include:
1. Members of the current community feeling left out or frustrated because, for the first time, they are no longer aligned with / no longer served by a top EA organisation.
2. (From ZDGroff) “Organizations like 80,000 Hours set the tone for the community, and I think there’s good rule-of-thumb reasons to think focusing on one issue is a mistake. As 80K’s problem profile on factory farming says, factory farming may be the greatest moral mistake humanity is currently making, and it’s good to put some weight on rules of thumb in addition to expectations.”
3. The risk of narrowing the funnel into EA, as fewer people will be attracted to a narrower AI focus (mentioned a few times). This seems like a pretty serious issue not to address, given that 80k (like it or not) is an EA front page.
Just because 80k doesn’t necessarily have these issues as their top goal doesn’t mean these issues don’t exist. I sense a bit of an “ostrich” mindset. I’ve heard a couple of times that they aren’t aiming to be an onramp to EA, but that doesn’t stop them from being one of the main onramps, as evidenced by studies that have asked people how they got into EA.
I think the post is somewhat tone deaf, and this could easily have been mitigated with some simple, soft, caring language, such as “we realise that some people may feel...” and “this could make it harder for...”. Maybe that’s not the tone 80k normally takes, but I think it’s a nicer way to operate, and it costs you basically nothing.