Thanks for this! I agree that the case for working at an EA org seems less clear if you have already established career capital in a field or organization.
Regardless, the most important crux here is this belief:
I think having 1% of humanity lightly engaged in EA-related activities is more valuable than having 0.0001% deeply engaged.
Whether EA alignment/engagement is necessary is an enduring question within movement-building. Perhaps the most relevant version of it right now concerns AI safety: I know several group organizers who believe that (a) AI is one of the most important causes and (b) EA alignment is crucial to being able to do good alignment work, which would mean it’s more important to get a small fraction of humanity deeply engaged in EA activities than a large fraction lightly engaged.
Another way of framing this is that impact might be heavy-tailed: that is, most of the impact might come from people at the very tail end of the population (e.g., people who are deeply engaged in EA). If that were true, it might still be more impactful to deeply engage a few people than to lightly engage many people.
I’d guess that the people likeliest to believe impact is heavy-tailed are also those who most prioritize x-risk reduction (especially from AI), which would also reduce their perception of the impact of earning-to-give (because of longtermism’s funding situation, as you note). I’m not sure those kinds of group organizers would agree that they should prioritize activities that promote ‘light’ EA engagement (e.g., local volunteering) or high-absorbency paths (e.g., earning-to-give), because it’s plausible that the marginal impact of deeper engagement outweighs the benefits of broader reach.
But none of this contravenes your overall point that for some individuals, the most marginally impactful thing they could do may not be to work at an EA org.
I think having 1% of humanity lightly engaged in EA-related activities is more valuable than having 0.0001% deeply engaged.
I agree that this is the crux, but I don’t think it’s an either-or scenario. I guess the question may be how to prioritize recruiting for high-priority EA jobs while also promoting higher-absorbency roles to those who can’t work in the high-priority ones.
Being “mediocre, average, or slightly above average” isn’t necessarily a permanent state. People develop career capital and skills, and someone who isn’t a good fit for a priority role out of university (or at any particular moment) may become one over time. Some of Marc’s suggestions could be thought of as stepping stones (he mentioned this in a few places, but it seems worth calling out).
Relatedly, the EA jobs landscape is going to change a lot in the next few years as funding pours in and projects get started and need to staff up. It seems worthwhile to keep the “collateral damage” engaged and feeling like part of the EA community, so that they can potentially help fill the new roles that are created.
This is a really good point, thank you for adding important nuance! I think coordination within the EA community is important for ensuring that we engage and sustain the entire spectrum of talent. I’d be keen for people with good fits* to work on engaging people who are less likely to be in the ‘heavy tail’ of impact.
*e.g., those who have a strong comparative advantage for this work, or who are already embedded in communities that may find it harder to pivot
I also have a strong reaction to Marc’s “collateral damage” phrase. I feel sad that this may be a perception people hold, and I do very much want people to feel like they can contribute impactfully beyond mainstream priority paths. I think this could be partly a communication issue, where there’s conflation between (1) what the [meta-]EA community should prioritize next, (2) what the [cause-specific, e.g. x-risk] community should prioritize next,** and (3) what this specific individual could do to have the most impact. My original comment was intended to get at (1) and (2), while acknowledging that (3) can look very different, more like what Marc is suggesting.
**And that’s ignoring that there aren’t clear distinctions between (1) and (2). Usually there’s significant overlap!
I find the claim that people could upskill into significantly more impactful paths really interesting. It seems related to my belief that far more people than we currently expect can become extremely impactful, provided we identify their specific comparative advantages. I’d be excited for someone to think about potential mechanisms for (a) supporting later-stage professionals in identifying and pivoting to higher-impact opportunities and (b) constructing paths for early-career individuals to upskill specifically with a higher-impact path in mind.
I am thinking along similar lines, Miranda, and I may have some of that comparative advantage too :)
I don’t like to talk about plans too much before actually getting down to doing, but I am working on a project to find ways to support people coming to EA mid-career/mid-life (as I did). I expect to write a top-level post about this in the next few weeks.
The goals are crystallizing a bit:
1. helping to keep people engaged and feeling like a part of the community even if they can’t (aren’t a good fit for, or aren’t yet ready to) consider a high-impact career change
2. helping people figure out how to have the most impact in the immediate term, within current constraints
3. helping people work towards higher impact, even if it’s in the longer term
Some ideas for how to do it:
1. compiling and organizing resources that are specifically relevant for this demographic
2. interfacing with EA orgs (80k, local groups, EA Anywhere, Virtual Programs, etc.) in appropriate, mutually beneficial ways
3. peer-based support (because mid-career/life situations vary widely), probably taking the form of a group to start, then hopefully figuring out what kind of 1-on-1 support could work too (mentorship, buddy system, etc.)
That sounds very exciting. I’ll keep an eye out for your post (though I’d be grateful if you could ping me when you publish it, too)!
Will do!
Thank you for explaining it so well.
I guess EA is interested in getting the best, and that justifies giving hope to many people who are between OK and almost the best. But that process has some collateral damage. This post, perhaps, is about options for dealing with that collateral damage.