The extent to which this is a risk depends heavily on how strongly you read the two occurrences of "very" in your sentence "A career org that (1) was very broad in its focus, and/or very accepting of different views". [...] we're not broadening our values or expectations to areas that are well outside the EA community.
I think this does basically remove the following potential worry I pointed to:
Or the risk could arise as a result of substantial portions of the EA community being scattered across a huge range of cause areas, or a range that's not that huge but includes some areas that are probably much less pressing (in expectation).
But it's not clear to me that it removes this worry I pointed to:
I think this risk could arise even if the set of cause areas is still basically only ones supported by a large portion of current EAs, if this org made it much more likely we'd get lots of new EAs who:
chose these areas for relatively random reasons, and/or
aren't very thoughtful about the approach they're taking within the cause area
You do also say "I won't be 'accepting' of people who reach those conclusions in shoddy ways." But this seems at least somewhat in tension with some key parts of the vision for the organisation. E.g., the Impact doc says:
We think it's very likely that some people might be a good fit for top priority paths but not immediately. This may be because they aren't ready yet to accept some aspects of EA (e.g. don't fully accept cause neutrality but are attached to high impact cause areas such as climate change) [...] We think giving them options to start with easier career changes or easier ways to use their career for good may, over time, give them a chance to consider even higher impact changes.
Some people who aren't ready yet to accept aspects of EA like cause-neutrality might indeed become ready for that later, and that does seem like a benefit of this approach. But some might just continue to not accept those aspects of EA. So if part of your value proposition is specifically that you can appeal to people who are not currently cause-neutral, that seems to pose a risk of reducing the proportion of EAs as a whole who are cause-neutral (and the same may go for some other traits, like willingness to reconsider one's current job rather than just one's cause area).
To be clear, I think climate change is an important area, and supporting people to be more impactful in that area seems valuable. But here you're talking about someone who essentially "happens to" be focused on climate change and doesn't accept cause neutrality. If everyone in EA were replaced by someone identical to them in most ways, including being focused on the same area, except that they were drawn to that area somewhat randomly rather than through careful thought, I think that'd be a less good community. (I still think such people can of course be impactful, dedicated, good people; I'm just talking about averages and movement strategy, not meaning to pass personal judgement.)
Do you have thoughts on how to resolve the tension between wanting to bring in people who aren't (yet) cause-neutral (or willing to reconsider their job/field/whatever) and avoiding partially "eroding" good aspects of EA? (It could be reasonable to just say you'll experiment at a small scale and reassess after that, or that you think that risk is justified by the various benefits, or something.)
This is something we discussed at length and are still thinking about.
As you suggest at the end, the usual "we'll experiment and see" is part of the answer, but we have some more specific thoughts as well:
I think there's a meaningful difference between someone who uses "shoddy" methodology and someone who's thoughtfully trying to figure out the best course of action but either hasn't got there yet or hasn't yet overcome some bad priors or biases. While I'm sure there are some edge cases, I think most cases aren't on the edge.
I think most of our decisions are easier in practice than in theory. The content we write will be (to the best of our ability) good content that showcases how we (and the EA community) believe these issues should be considered. 1:1s and workshops will prioritize people we believe could benefit and could have a meaningful impact, and since we don't expect to keep up with demand any time soon, I doubt we'll have to consider cases that seem detrimental to the community. Finally, our writing, while aiming to be accessible and welcoming to people with a wide variety of views, will describe thought processes, and cover a scope, similar to those of the broader EA community (albeit not 80K). As a result, I think it will be comparable to other gateways to the community that exist today.
The above point makes the practical considerations of the near future simpler. It doesn't mean we don't still have a lot to think about, talk through, and figure out regarding what we mean by "Agnostic EA". That's something we haven't stopped discussing since the idea for this came up, and I don't think we'll stop any time soon.