Yes, there’s a chance it could be discouraging, and if there are ways to improve it without sacrificing accuracy, I’d like to see that happen.
On the other hand, if you have strong reason to believe that some cause areas have orders of magnitude more impact than others, then you will often achieve more impact by slightly increasing the number of people working on these priority areas than by greatly increasing the number of people working on less impactful areas. In other words, you can often have more impact by accurately representing your beliefs, because it can be hard for the benefits of serving a broader audience to outweigh the impact of persuading more people to focus on something important.
I see what you are saying, but given the huge impact 80,000 Hours has, I feel a bit uneasy about this. What if those many, many “people working on less impactful areas” tell their friends etc. about EA and 80k, and some of them get into x-risk, AI safety and so on? I wouldn’t underestimate the community building aspect here, and I don’t think potentially “rebuffing” a large number of active EAs would be a net positive.
“then you will often achieve more impact by slightly increasing the number of people working on these priority areas than by greatly increasing the number of people working on less impactful areas”
The whole point of having “neutral” EA entities like CEA and 80000 is to avoid this line of thinking and pursue a “big tent” approach, rather than smothering causes deemed ineffective.
If they had been following this logic ten years ago, when AI risk was a fringe cause, they would never have accepted it into the movement at all, believing it to be ineffective and bad PR.
AI-risk orgs already exist and are free to make their case on relative cause areas on their own platforms. They shouldn’t be doing this subtly on ostensibly neutral platforms.
“The whole point of having “neutral” EA entities like CEA and 80000 is to avoid this line of thinking”—Hmm… describing this as the “whole point” seems a bit strong?
I agree that sometimes there’s value in adopting a stance of neutrality. I’m still not entirely sure why I feel this way, but I have an intuition that CEA should lean more toward neutrality than 80,000 Hours. Perhaps it’s because I see CEA as more focused on community building and taking responsibility for the community overall. Even then, I wouldn’t insist that CEA be purely neutral, but rather that it strike a balance between its own views and those of the wider EA community.
One area where I do agree, though, is that organisations should be transparent about what they represent.
A perfect example of the dual, and sometimes diametrically opposed, meanings of “neutrality” in EA: to some it means being neutral between cause areas, to others it means being neutral in our approach to how to do the most good.