There’s one potential risk that occurs to me and that I think wasn’t addressed in the linked docs: A career org that (1) was very broad in its focus, and/or very accepting of different views, but (2) still funnelled people into EA, could potentially erode some of the focus or good distinctive elements of the EA community as a whole in a way that reduces our impact. By being relatively broad, Probably Good might risk causing some degree of that sort of “erosion”.
(Note that I’m not saying this is likely, or that it would outweigh the positive impacts of Probably Good—just raising it as something worth thinking about.)
To illustrate by taking it to an extreme, if 9 out of 10 people one met in the EA community (e.g., at EA events, in Forum discussions) were more like the average current non-EA than the average current EA, it would be a lot less obvious why there’s an EA community at all, and probably more likely that the community would just dissolve into the broader world, or that more distinctive sub-communities would splinter off.[1] It would also be harder and less motivating to coordinate, find ideas relevant to my interests and plans, get useful feedback from the community, etc. And the EA community might be less appealing to the sort of potential new members we’d be most excited to have (e.g., people who are thoughtful, open-minded, and passionate about impartial altruism).
This is partly because EAs on average seem to have relatively high levels of some good traits (e.g., desire to have an impact, thoughtfulness, intelligence), and to that extent this is somewhat uncomfortable and smacks of elitism. But it’s also partly just because, in general, communities may coordinate and hang together better and longer if it’s clearer what their purpose/focus is. (E.g., I think the current members of a randomly chosen hobby club would enjoy that club less if it got an influx of new members who were much less keen on that hobby than the current members.)
I think this risk could arise even if the set of cause areas is still basically only ones supported by a large portion of current EAs, if this org made it much more likely we’d get lots of new EAs who:
chose these areas for relatively random reasons, and/or
aren’t very thoughtful about the approach they’re taking within the cause area
E.g., they decided to address the problem through a particular type of job before learning more about the real nature of the problem, and then don’t re-evaluate that decision or listen to feedback on that, and just want advice on precisely how to approach that job or what org to do it at
Or the risk could arise as a result of substantial portions of the EA community being scattered across a huge range of cause areas, or a range that’s not that huge but includes some areas that are probably much less pressing (in expectation). (To be clear, I think there are benefits to EA already representing a variety of cause areas, and I like that about the community—but I think there could be more extreme or less thoughtful versions of that where the downsides would outweigh the benefits.)
I’d be interested to hear whether you think that risk is plausible for an initiative roughly like yours in general, and for your org in particular, and whether you have thoughts on how you might deal with it.
(It seems plausible that there are various ways you can mitigate this risk, or reasons why your current plan might already mostly avoid this risk.)
[1] I think my thinking/phrasing here might be informed by parts of this SSC post, as I read that recently. That said, I can’t recall if that post as a whole supports my points.
For the sake of clarity I’ll restate what I think you meant:
We’re not discussing the risk of people taking less impactful career paths than they would have counterfactually, because we exist (and they might otherwise have only known about 80k, for example). That is a risk we discuss in the document.
We’re talking specifically about “membership” in the EA community: that people who are less committed / value-aligned / thoughtful in the way that EAs tend to be / something else would now join the community and dilute or erode the things we think are special (and really good) about our community.
Assuming this is what you meant, I’ll write my general thoughts on it:
1. The extent to which this is a risk depends heavily on the strength of the two occurrences of “very” in your sentence “A career org that (1) was very broad in its focus, and/or very accepting of different views”. While we’re still working out what the borders of our acceptance are (as I think Sella commented in response to your question on agnosticism), we’re not broadening our values or expectations to areas that are well outside the EA community. I don’t currently see a situation where we give advice or a recommendation that isn’t in line with the community in general. It’s worth noting that the scope and level of generality that the EA community engages with in most other interactions (EA Global, charity evaluation orgs, incubation programs, etc.) is much broader than 80K’s current focus. We see our work as matching that broader scope rather than expanding it, and so we don’t believe we’re changing where EA stands on this spectrum—simply applying it to the career space as well.
2. More importantly, even in cases where we could make a recommendation that (for example) 80k wouldn’t stand behind—our methodology, values, rigor in analysis, etc. should definitely be in line with what currently exists, and is expected, in the community. I can’t promise we won’t reach different conclusions sometimes, but I won’t be “accepting” of people who reach those conclusions in shoddy ways.
3. This is a relatively general point, but it’s important and it mitigates a lot of our risks: In the next few months, we’re not planning to grow, do extensive outreach, market ourselves, or try to bring a lot of new people in. That’s explicitly because we want to create content and start working, do our best to evaluate the risks (with the help of the community), and only start having a large impact once we’re more confident in the strength and direction of that impact.
In a sense (unless we fail pretty badly at evaluating in a few months), we’re risking the very small harm that a small, unknown org can do, in exchange for benefits that could be quite large if we do find that our impact looks good.
The extent to which this is a risk is very dependent on the strength of the two appearances “very” in your sentence “A career org that (1) was very broad in its focus, and/or very accepting of different views”. [...] we’re not broadening our values or expectations to areas that are well outside the EA community.
I think this does basically remove the following potential worry I pointed to:
Or the risk could arise as a result of substantial portions of the EA community being scattered across a huge range of cause areas, or a range that’s not that huge but includes some areas that are probably much less pressing (in expectation).
But it’s not clear to me that it removes this worry I pointed to:
I think this risk could arise even if the set of cause areas is still basically only ones supported by a large portion of current EAs, if this org made it much more likely we’d get lots of new EAs who:
chose these areas for relatively random reasons, and/or
aren’t very thoughtful about the approach they’re taking within the cause area
You do also say “I won’t be “accepting” of people who reach those conclusions in shoddy ways.” But this seems at least somewhat in tension with what seem to be some key parts of the vision for the organisation. E.g., the Impact doc says:
We think it’s very likely that some people might be a good fit for top priority paths but not immediately. This may be because they aren’t ready yet to accept some aspects of EA (e.g. don’t fully accept cause neutrality but are attached to high impact cause areas such as climate change) [...] We think giving them options to start with easier career changes or easier ways to use their career for good may, over time, give them a chance to consider even higher impact changes.
Some people who aren’t ready yet to accept aspects of EA like cause-neutrality might indeed become ready for that later, and that does seem like a benefit of this approach. But some might just continue to not accept those aspects of EA. So if part of your value proposition is specifically that you can appeal to people who are not currently cause-neutral, it seems like that poses a risk of reducing the proportion of EAs as a whole who are cause-neutral (and the same may go for some other traits, like willingness to reconsider one’s current job rather than cause area).
To be clear, I think climate change is an important area, and supporting people to be more impactful in that area seems valuable. But here you’re talking about someone who essentially “happens to” be focused on climate change, and doesn’t accept cause neutrality. If everyone in EA were replaced by someone identical to them in most ways, including being focused on the same area, except that they were drawn to that area somewhat randomly rather than through careful thought, I think that’d be a less good community. (I still think such people can of course be impactful, dedicated, good people—I’m just talking about averages and movement strategy, and not meaning to pass personal judgement.)
Do you have thoughts on how to resolve the tension between wanting to bring in people who aren’t (yet) cause-neutral (or willing to reconsider their job/field/whatever) and avoiding partially “eroding” good aspects of EA? (It could be reasonable to just say you’ll experiment at a small scale and reassess after that, or that you think that risk is justified by the various benefits, or something.)
This is something we discussed at length and are still thinking about.
As you write at the end, the usual “we’ll experiment and see” applies, but we have some more specific thoughts as well:
I think there’s a meaningful difference between someone who uses “shoddy” methodology and someone who’s thoughtfully trying to figure out the best course of action and has either not got there yet or hasn’t yet overcome some bad priors or biases. While I’m sure there are some edge cases, I think most cases aren’t on the edge.
I think most of our decisions are easier in practice than in theory. The content we’ll write will be (to the best of our ability) good content that will showcase how we (and the EA community) believe these issues should be considered. 1:1s or workshops will prioritize people we believe could benefit and could have a meaningful impact, and since we don’t expect to keep up with demand any time soon, I doubt we’ll have to consider cases that seem detrimental to the community. Finally, our writing, while aiming to be accessible and welcoming to people with a wide variety of views, will describe similar thought processes and discuss similar scopes to the broader EA community (albeit not 80K). As a result, I think it will be comparable to other gateways to the community that exist today.
The above point makes the practical considerations of the near future simpler. It doesn’t mean that we don’t have a lot to think about, talk through, and figure out regarding what we mean by ‘Agnostic EA’. That’s something we haven’t stopped discussing since the idea for this came up, and I don’t think we’ll stop any time soon.
Thanks for that response! I think you make good points.
Assuming this is what you meant[...]
Yes, I think you’ve captured what I was trying to say.
I should perhaps clarify that I didn’t mean to imply that this risk was very likely from your particular org, or that the existence of this risk means you shouldn’t try this. I agree in particular that your point 3 is important and mitigates a lot of your risks, and that there’s high information value and low risk in trying this out without yet doing extensive marketing etc.
I was essentially just wondering whether you’d thought about that risk and how you planned to deal with it :)