I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause. As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.
A couple comments on other parts of your post in case it’s helpful:
> I also struggle to understand how this is the best strategy as an onramp for people to EA—assuming that is still part of the purpose of 80k. Yes, there are other orgs which do career advising and direction, but they are still minnows compared with you. Even if your sole goal is to get as many people into AI work as possible, I think you could well achieve that better through helping people understand worldview diversification and helping them make up their own minds, while of course keeping a heavy focus on AI safety and clearly having that as your no. 1 cause.
Our purpose is not to get people into EA, but to help solve the world’s most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k’s relationship to the EA community here.) But I also think the world has changed a lot and will change even more in the near future, and it would be surprising if 80k’s best path to impact didn’t change as well. I think focusing our ongoing efforts more on making the development of AGI go well is our best path to impact, building on what 80k has created over time.
But I might be wrong about this, and I think it’s reasonable that others disagree.
I don’t expect the whole EA community to take the same approach. CEA has said it wants to take a “principles-first approach”, rather than focusing more on AI as we will (though to be clear, our focus is driven by our principles, and we still want to communicate that clearly).
I think open communication about what different orgs are prioritising and why is really vital for coordination and to avoid single-player thinking. My hope is that people in the EA community can do this without making others with different cause prioritisations feel bad about their disagreements or differences in strategy. I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.
> boy is that some bet to make.
Unfortunately I think that all the options in this space involve taking bets in an important way. We also think that it’s costly if users come to our site and don’t quickly understand that we think the current AI situation deserves societal urgency.
On the other costs that you mention in your post, I think I see them as less stark than you do. Quoting Cody’s response to Rocky above:
> We still plan to have our career guide up as a key piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t think I see this as clear a break from the past as you might.
I also want to thank you for sharing your concerns, which I realise can be hard to do. But it’s really helpful for us to know how people are honestly reacting to what we do.
Thanks for the thoughtful reply; I really appreciate it. To have the CEO of an org replying to comments is refreshing, and I actually think it’s an excellent use of a few hours of time.
“I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.” This is fantastic to hear and makes a big difference; thanks for this.
“Our purpose is not to get people into EA, but to help solve the world’s most pressing problems.” This might be your purpose, but the reality is that 80,000 Hours plays an enormous role in getting people into EA.
Losing some (or a lot) of this impact could have been recognised as a potentially large (perhaps the largest) tradeoff of the new direction. What probably hit me most about the announcement was the seeming lack of recognition of the potentially most important tradeoffs; it makes it seem like the tradeoffs haven’t been considered, when I’m sure they have.
You’re right that we make bets whatever we do or don’t do.
Thanks again for the reply!