For the sake of clarity, I’ll restate what I think you meant:
We’re not discussing the risk of people taking less impactful career paths than they would have counterfactually because we exist (e.g. if otherwise they might have only known about 80k). That is a risk we discuss in the document.
We’re talking specifically about “membership” in the EA community: the worry that people who are less committed / value-aligned / thoughtful in the way that EAs tend to be (or something else along those lines) would now join the community and dilute or erode the things we think are special (and really good) about it.
Assuming this is what you meant, I’ll write my general thoughts on it:
1. The extent to which this is a risk depends heavily on how strongly the two instances of “very” are meant in your sentence “A career org that (1) was very broad in its focus, and/or very accepting of different views”. While we’re still working out where the borders of our acceptance lie (as I think Sella commented in response to your question on agnosticism), we’re not broadening our values or expectations to areas that are well outside the EA community. I don’t currently see a situation where we’d give advice or a recommendation that isn’t in line with the community in general. It’s worth noting that the scope and level of generality that the EA community engages with in most other contexts (EA Global, charity evaluation orgs, incubation programs, etc.) is much broader than 80K’s current focus. We see our work as matching that broader scope rather than expanding it, so we don’t believe we’re changing where EA stands on this spectrum; we’re simply applying it to the career space as well.
2. More importantly, even in cases where we might make a recommendation that (for example) 80k wouldn’t stand behind, our methodology, values, rigor in analysis, etc. should definitely be in line with what currently exists, and is expected, in the community. I can’t promise we won’t reach different conclusions sometimes, but I won’t be “accepting” of people who reach those conclusions in shoddy ways.
3. This is a relatively general point, but it’s important and it mitigates a lot of our risks: in the next few months, we’re not planning to grow, do extensive outreach, market ourselves, or try to bring in a lot of new people. That’s explicitly because we want to create content and start working, do our best to evaluate the risks (with the help of the community), and only start having a large impact once we’re more confident in the strength and direction of that impact.
In a sense (unless we fail pretty badly at that evaluation in a few months), we’re risking the fairly small harm that a small, unknown org can do, while potentially gaining benefits that could be quite large if our impact does turn out to look good.
The extent to which this is a risk depends heavily on how strongly the two instances of “very” are meant in your sentence “A career org that (1) was very broad in its focus, and/or very accepting of different views”. [...] we’re not broadening our values or expectations to areas that are well outside the EA community.
I think this does basically remove the following potential worry I pointed to:
Or the risk could arise as a result of substantial portions of the EA community being scattered across a huge range of cause areas, or a range that’s not that huge but includes some areas that are probably much less pressing (in expectation).
But it’s not clear to me that it removes this worry I pointed to:
I think this risk could arise even if the set of cause areas is still basically only ones supported by a large portion of current EAs, if this org made it much more likely we’d get lots of new EAs who:
chose these areas for relatively random reasons, and/or
aren’t very thoughtful about the approach they’re taking within the cause area
You also say “I won’t be ‘accepting’ of people who reach those conclusions in shoddy ways.” But this seems at least somewhat in tension with what seem to be some key parts of the vision for the organisation. E.g., the Impact doc says:
We think it’s very likely that some people might be a good fit for top priority paths but not immediately. This may be because they aren’t ready yet to accept some aspects of EA (e.g. don’t fully accept cause neutrality but are attached to high impact cause areas such as climate change) [...] We think giving them options to start with easier career changes or easier ways to use their career for good may, over time, give them a chance to consider even higher impact changes.
Some people who aren’t yet ready to accept aspects of EA like cause-neutrality might indeed become ready for that later, and that does seem like a benefit of this approach. But some might just continue not to accept those aspects of EA. So if part of your value proposition is specifically that you can appeal to people who are not currently cause-neutral, that seems to pose a risk of reducing the proportion of EAs as a whole who are cause-neutral (and the same may go for some other traits, like willingness to reconsider one’s current job rather than just one’s cause area).
To be clear, I think climate change is an important area, and supporting people to be more impactful in that area seems valuable. But here you’re talking about someone who essentially “happens to” be focused on climate change, and doesn’t accept cause neutrality. If everyone in EA were replaced by someone identical to them in most ways, including being focused on the same area, except that they were drawn to that area somewhat randomly rather than through careful thought, I think that would be a less good community. (I still think such people can of course be impactful, dedicated, good people; I’m just talking about averages and movement strategy, and not meaning to pass personal judgement.)
Do you have thoughts on how to resolve the tension between wanting to bring in people who aren’t (yet) cause-neutral (or willing to reconsider their job/field/whatever) and avoiding partially “eroding” good aspects of EA? (It could be reasonable to just say you’ll experiment at a small scale and reassess after that, or that you think that risk is justified by the various benefits, or something.)
This is something we discussed at length and are still thinking about.
As you write at the end, the usual “we’ll experiment and see” does apply, but we have some more specific thoughts as well:
I think there’s a meaningful difference between someone who uses “shoddy” methodology and someone who’s thoughtfully trying to figure out the best course of action and either hasn’t gotten there yet or hasn’t yet overcome some bad priors or biases. While I’m sure there are some edge cases, I think most cases aren’t on the edge.
I think most of our decisions are easier in practice than in theory. The content we write will be (to the best of our ability) good content that showcases how we (and the EA community) believe these issues should be considered. 1:1s and workshops will prioritize people we believe could benefit and could have a meaningful impact, and since we don’t expect to keep up with demand any time soon, I doubt we’ll have to consider cases that seem detrimental to the community. Finally, our writing, while aiming to be accessible and welcoming to people with a wide variety of views, will describe thought processes and scopes similar to those of the broader EA community (albeit not 80K’s). As a result, I think it will be comparable to other gateways to the community that exist today.
The above point makes the practical considerations of the near future simpler. It doesn’t mean we don’t have a lot to think about, talk through, and figure out regarding what we mean by ‘Agnostic EA’. That’s something we haven’t stopped discussing since the idea came up, and I don’t think we’ll stop any time soon.
Thanks for that response! I think you make good points.
Assuming this is what you meant[...]
Yes, I think you’ve captured what I was trying to say.
I should perhaps clarify that I didn’t mean to imply that this risk is very likely from your particular org, or that the existence of this risk means you shouldn’t try this. I agree in particular that your point 3 is important and mitigates a lot of your risks, and that there’s high information value and low risk in trying this out without yet doing extensive marketing etc.
I was essentially just wondering whether you’d thought about that risk and how you planned to deal with it :)