There's one potential risk that occurs to me and that I think wasn't addressed in the linked docs: A career org that (1) was very broad in its focus, and/or very accepting of different views, but (2) still funnelled people into EA, could potentially erode some of the focus or good distinctive elements of the EA community as a whole in a way that reduces our impact. By being relatively broad, Probably Good might risk causing some degree of that sort of "erosion".
(Note that I'm not saying this is likely, or that it would outweigh the positive impacts of Probably Good - just raising it as something worth thinking about.)
To illustrate by taking it to an extreme, if 9 out of 10 people one met in the EA community (e.g., at EA events, in Forum discussions) were more like the average current non-EA than the average current EA, it would be a lot less obvious why there's an EA community at all, and probably more likely that the community would just dissolve into the broader world, or that more distinctive sub-communities would splinter off.[1] It would also be harder and less motivating to coordinate, find ideas relevant to my interests and plans, get useful feedback from the community, etc. And engaging with the EA community might be less appealing for the sort of potential new members we'd be most excited to have (e.g., people who are thoughtful, open-minded, and passionate about impartial altruism).
This is partly because EAs on average seem to have relatively high levels of some good traits (e.g., desire to have an impact, thoughtfulness, intelligence), and to that extent this reasoning is somewhat uncomfortable and smacks of elitism. But it's also partly just because, in general, communities may coordinate and hang together better and longer if it's clearer what their purpose/focus is. (E.g., I think the current members of a randomly chosen hobby club would enjoy that club less if it got an influx of new members who were much less keen on that hobby than the current members.)
I think this risk could arise even if the set of cause areas is still basically only ones supported by a large portion of current EAs, if this org made it much more likely we'd get lots of new EAs who:
chose these areas for relatively random reasons, and/or
aren't very thoughtful about the approach they're taking within the cause area
E.g., they decided to address the problem through a particular type of job before learning more about the real nature of the problem, and then don't re-evaluate that decision or listen to feedback on that, and just want advice on precisely how to approach that job or what org to do it at
Or the risk could arise as a result of substantial portions of the EA community being scattered across a huge range of cause areas, or a range that's not that huge but includes some areas that are probably much less pressing (in expectation). (To be clear, I think there are benefits to EA already representing a variety of cause areas, and I like that about the community - but I think there could be more extreme or less thoughtful versions of that where the downsides would outweigh the benefits.)
I'd be interested to hear whether you think that risk is plausible for an initiative roughly like yours in general, and for your org in particular, and whether you have thoughts on how you might deal with it.
(It seems plausible that there are various ways you can mitigate this risk, or reasons why your current plan might already mostly avoid this risk.)
[1] I think my thinking/phrasing here might be informed by parts of this SSC post, as I read that recently. That said, I can't recall if that post as a whole supports my points.
For the sake of clarity I'll restate what I think you meant:
We're not discussing the risk of people taking less impactful career paths than they would have taken counterfactually because we existed (and otherwise they might have only known 80k, for example). That is a risk we discuss in the document.
We're talking specifically about "membership" in the EA community: that people who are less committed / value-aligned / thoughtful in the way that EAs tend to be / something else would now join the community and dilute or erode the things we think are special (and really good) about our community.
Assuming this is what you meant, I'll write my general thoughts on it:
1. The extent to which this is a risk is very dependent on the strength of the two appearances of "very" in your sentence "A career org that (1) was very broad in its focus, and/or very accepting of different views". While we're still working out what the borders of our acceptance are (as I think Sella commented in response to your question on agnosticism), we're not broadening our values or expectations to areas that are well outside the EA community. I don't currently see a situation where we give advice or a recommendation that isn't in line with the community in general. It's worth noting that the scope and level of generality that the EA community engages with in most other interactions (EA Global, charity evaluation orgs, incubation programs, etc.) is much broader than 80K's current focus. We see our work as matching that broader scope rather than expanding it, and so we don't believe we're changing where EA stands on this spectrum - simply applying it to the career space as well.
2. More importantly, even in cases where we could make a recommendation that (for example) 80k wouldn't stand behind - our methodology, values, rigor in analysis, etc. should definitely be in line with what currently exists, and is expected, in the community. I can't promise we won't reach different conclusions sometimes, but I won't be "accepting" of people who reach those conclusions in shoddy ways.
3. This is a relatively general point, but it's important and it mitigates a lot of our risks: In the next few months, we're not planning to grow, do extensive outreach, market ourselves, or try to bring a lot of new people in. That's explicitly because we want to create content and start working, do our best to evaluate the risks (with the help of the community) - and only start having a large impact once we're more confident in the strength and direction of that impact.
In a sense (unless we fail pretty badly at evaluating in a few months) - we're risking the very small harm a small, unknown org could do, while potentially gaining benefits that could be quite large if we do find that our impact looks good.
The extent to which this is a risk is very dependent on the strength of the two appearances of "very" in your sentence "A career org that (1) was very broad in its focus, and/or very accepting of different views". [...] we're not broadening our values or expectations to areas that are well outside the EA community.
I think this does basically remove the following potential worry I pointed to:
Or the risk could arise as a result of substantial portions of the EA community being scattered across a huge range of cause areas, or a range that's not that huge but includes some areas that are probably much less pressing (in expectation).
But it's not clear to me that it removes this worry I pointed to:
I think this risk could arise even if the set of cause areas is still basically only ones supported by a large portion of current EAs, if this org made it much more likely we'd get lots of new EAs who:
chose these areas for relatively random reasons, and/or
aren't very thoughtful about the approach they're taking within the cause area
You do also say "I won't be 'accepting' of people who reach those conclusions in shoddy ways." But this seems at least somewhat in tension with what seem to be some key parts of the vision for the organisation. E.g., the Impact doc says:
We think it's very likely that some people might be a good fit for top priority paths but not immediately. This may be because they aren't ready yet to accept some aspects of EA (e.g. don't fully accept cause neutrality but are attached to high impact cause areas such as climate change) [...] We think giving them options to start with easier career changes or easier ways to use their career for good may, over time, give them a chance to consider even higher impact changes.
Some people who aren't ready yet to accept aspects of EA like cause-neutrality might indeed become ready for that later, and that does seem like a benefit of this approach. But some might just continue to not accept those aspects of EA. So if part of your value proposition is specifically that you can appeal to people who are not currently cause-neutral, it seems like that poses a risk of reducing the proportion of EAs as a whole who are cause-neutral (and the same may go for some other traits, like willingness to reconsider one's current job rather than cause area).
To be clear, I think climate change is an important area, and supporting people to be more impactful in that area seems valuable. But here you're talking about someone who essentially "happens to" be focused on climate change, and doesn't accept cause neutrality. If everyone in EA were replaced by someone identical to them in most ways, including being focused on the same area, except that they were drawn to that area somewhat randomly rather than through careful thought, I think that'd be a less good community. (I still think such people can of course be impactful, dedicated, good people - I'm just talking about averages and movement strategy, and not meaning to pass personal judgement.)
Do you have thoughts on how to resolve the tension between wanting to bring in people who aren't (yet) cause-neutral (or willing to reconsider their job/field/whatever) and avoiding partially "eroding" good aspects of EA? (It could be reasonable to just say you'll experiment at a small scale and reassess after that, or that you think that risk is justified by the various benefits, or something.)
This is something we discussed at length and are still thinking about.
As you write at the end, the usual "I'll experiment and see" applies, but we have some more specific thoughts as well:
I think there's a meaningful difference between someone who uses "shoddy" methodology and someone who's thoughtfully trying to figure out the best course of action and either hasn't got there yet or hasn't yet overcome some bad priors or biases. While I'm sure there are some edge cases, I think most cases aren't on the edge.
I think most of our decisions are easier in practice than in theory. The content we'll write will be (to the best of our ability) good content that showcases how we (and the EA community) believe these issues should be considered. 1:1s or workshops will prioritize people we believe could benefit and could have a meaningful impact, and since we don't expect to keep up with demand any time soon, I doubt we'll have to consider cases that seem detrimental to the community. Finally, our writing, while aiming to be accessible and welcoming to people with a wide variety of views, will describe similar thought processes and discuss similar scopes to the broader EA community (albeit not 80K). As a result, I think it will be comparable to other gateways to the community that exist today.
The above point makes the practical considerations of the near future simpler. It doesn't mean that we don't have a lot to think about, talk through, and figure out regarding what we mean by "Agnostic EA". That's something that we haven't stopped discussing since the idea for this came up, and I don't think we'll stop any time soon.
Thanks for that response! I think you make good points.
Assuming this is what you meant[...]
Yes, I think you've captured what I was trying to say.
I should perhaps clarify that I didn't mean to imply that this risk was very likely from your particular org, or that the existence of this risk means you shouldn't try this. I agree in particular that your point 3 is important and mitigates a lot of your risks, and that there's high information value and low risk from trying this out without yet doing extensive marketing etc.
I was essentially just wondering whether you'd thought about that risk and how you planned to deal with it :)