Hmm, it’s funny, this post comes at a moment when I’m seriously considering moving in the opposite direction with my EA university group (towards being more selective and focused on EA-core cause areas). I’d like to know what you think of my reasoning for doing so.
My main worry is that as the interests of EA members broaden (e.g. to include helping locally), the EA establishment will have fewer concrete recommendations to offer, and people will not have a chance to truly internalize some core EA principles (e.g. amounts matter, doubt in the absence of measurement).
That has been an especially salient problem for my group: we live in a middle-income country (Colombia), and many people feel most excited about helping within our own country. However, when I’ve heard them make plans for how they would help, I struggle to see what difference presenting them with EA ideas has made. They tend to choose causes based on prior emotional connection rather than on attributes that suggest a better opportunity to help (e.g. by using the SNT framework). My expectation is that if we put more emphasis on the distinctive aspects of EA (and the concrete recommendations they imply), people will have a better chance of updating on the ways mainstream EA differs from what they already believed, and we will have a better shot at producing some counterfactual impact.
(Though, as a caveat, it’s possible that my group members’ tendency not to notice when EA ideas differ from their own stems from my particular aversion to openly questioning or contradicting people, rather than from their interest in less-explored areas of helping.)
What stops AI Safety orgs from just hiring ML talent outside EA for their junior/more generic roles?