then you will often achieve more impact by slightly increasing the number of people working on these priority areas than by greatly increasing the number of people working on less impactful areas
The whole point of having “neutral” EA entities like CEA and 80,000 Hours is to avoid this line of thinking and pursue a “big tent” approach, rather than smothering causes deemed ineffective.
If they had been following this logic ten years ago, when AI risk was a fringe cause, they would never have accepted it into the movement at all, believing it to be ineffective and bad PR.
AI-risk orgs already exist, and are free to make their case on relative cause areas on their own platforms. They shouldn’t be doing this subtly on ostensibly neutral platforms.
“The whole point of having “neutral” EA entities like CEA and 80,000 Hours is to avoid this line of thinking.” Hmm… describing this as the “whole point” seems a bit strong?
I agree that sometimes there’s value in adopting a stance of neutrality. I’m still not entirely sure why I feel this way, but I have an intuition that CEA should lean more toward neutrality than 80,000 Hours. Perhaps it’s because I see CEA as more focused on community building and taking responsibility for the community overall. Even then, I wouldn’t insist that CEA be purely neutral, but rather that it strike a balance between its own views and those of the wider EA community.
One area where I do agree, though, is that organisations should be transparent about what they represent.
A perfect example of the dual and sometimes diametrically opposed meanings of “neutrality” in EA: to some it means neutrality between cause areas; to others it means neutrality in our approach to how to do the most good.