I find myself pretty confused about how to think about this. Numerically, I feel like the level we’re advising for is at most top 3%, and probably more like top 1%-ish?
Some considerations that are hard for me to think through:
The current allocation and advice given by career EAs is very strongly geared towards very specific empirical views of a) who we actually talk to/advise, b) what the situation and needs of the world look like (including things like funding vs talent overhangs), and c) what we currently know about and are comfortable talking about. So for example, right now the advice is best suited for the top X%, maybe even top 0.Y%, of ability/credentials/financial stability/etc. This may or may not change in 10-20 years.
And when we give very general career advice like “whether you should expect to have more of an impact through donations or direct work”, it’s hard to say something definitive without forecasts 10-20 years out.
The general point here is that many of our conclusions/memes are phrased like logical statements (e.g. claims about the distributions of outcomes being power-law or whatever), but they’re really very specific empirical claims based on the situation as of 2014-2021.
Are you (and others) including initiative when you think about ability? This is related to smarts (in terms of seeing opportunities) and work ethic (in terms of following through on seizing opportunities when they arise), but it feels ultimately somewhat distinct.
When I think about EA-aligned ex-coworkers at Google, I’d guess ~all of them are in the top 3% for general ability (and would be in a higher percentile if you use a more favorable metric like programming ability or earning potential). But I’d still guess most of them wouldn’t end up doing direct work, for reasons including but not limited to starting new projects etc. being kind of annoying.
Like, I think many of them could do decent work if EA had a good centralized job allocation system and they were allocated to exactly the best direct-work fit for them, and a decent subset of them would actually sacrifice their comfortable BigTech work for something with a clear path to impact. But in practice <<50% of them would actually end up doing direct work that’s more useful than donations under the current EA allocation.
The current composition of the EA community is incredibly weird, even by rich-country standards, so most of us have a poor sense of how useful our thoughts are to others.
As a sanity check/Fermi: ~60k (~0.2%) of college-aged Americans attend Ivy League undergrad; you get ~2x from people attending similarly tiered universities (MIT/Stanford/UChicago etc.), and ~2-3x from people of similar academic ability who attended non-elite universities, plus a small smattering of people who didn’t go to university or dropped out, etc.
This totals to ~1% of the general population (rough arithmetic sketched below), and yet seems close to the average composition of EA (?).
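For concreteness, here’s a minimal sketch of that arithmetic; reading the ~2x and ~2-3x as cumulative multipliers, with 2.5 as my midpoint assumption:

```python
# Minimal Fermi sketch of the "~1% Ivy League-equivalent" estimate above.
# The figures are the rough numbers from the text; treating them as
# cumulative multipliers (and picking 2.5 as a midpoint) is my assumption.

ivy_share = 0.002     # ~60k Ivy undergrads among college-aged Americans (~0.2%)
similar_tier = 2.0    # ~2x after adding MIT/Stanford/UChicago-tier schools
non_elite = 2.5       # ~2-3x after adding similar-ability people at non-elite schools

total_share = ivy_share * similar_tier * non_elite
print(f"Ivy League-equivalent share of population: ~{total_share:.1%}")
# -> ~1.0%, before counting the small smattering of non-attendees/dropouts
```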
My guess is that most EAs don’t have a strong sense of what the 97th percentile of ability in the population looks like, never mind the 90th.
Reasons why I think the cutoff might in practice be higher:
Because EA is drawn from fairly far out in the tails of several distributions, we might overestimate population averages?
As you’ve noted, the cutoff for specific professions we recommend seems much higher than top 3% for that profession. For an example of something a bit outside the current zeitgeist, I think a typical Ivy League English major would not be very competitive for journalism roles (and naively I’d guess journalism to be much more of a comparative advantage for Ivy League English majors than most other roles)
Obviously you can be top X% in general and top 0.Y% in specific professions, but I’m not sure there are enough “specific professions” out there where people can have a large enough impact to outweigh earning to give.
(Note that I’m not saying that you need to have attended an elite college to do good work. E.g. Chris Olah didn’t go to college, and Eliezer Yudkowsky didn’t finish high school. But I think when we make these sorts of claims, we’re saying some people are overlooked/not captured by the existing credentialing systems and their general ability is on par with or higher than that of the people who are captured by such systems, and ~1% of the total population being Ivy League-equivalent is roughly where my Fermi lands.)
I feel like quite a few talented people well within the top 3% or even top 1% in terms of assessed general ability fail to do impactful direct work (either within or outside of the EA community), so the base rates aren’t looking super hot?
Reasons why I think the cutoff might in practice be lower:
I guess in every “elite” community I’m tangentially a part of or have heard of, there’s just a very strong incentive to see yourself as much more elite than you actually are, based on insufficient evidence or even evidence to the contrary.
So I guess in general we should have a moderate prior that we’re BSing ourselves when we think of ourselves (whether EA overall or direct work specifically) as especially elite.
Our advice just isn’t very optimized for a population of something like “otherwise normal people with a heroic desire to do good.” I can imagine lots and lots of opportunities in practice for people who aren’t stellar at e.g. climbing bureaucracies or academia, but are willing to dedicate their lives to doing good.
On balance I think there are stronger factors pushing the practical cutoff to be higher rather than lower than top 3%, but I’m pretty unsure about this.
I think I agree that the cutoff is, if anything, higher than top 3%, which is why I originally said ‘at best’. The smaller that top number is, the more glaring the oversight of not mentioning this explicitly every time we have conversations on this topic.
I have been thinking about the initiative bit, thank you for bringing it up. It seems to me that ability and initiative/independent-mindedness somewhat trade off against each other, so if you are not in the top 3% (or whatever) for ability, you might still be able to have more impact through direct work than donations given a lot of initiative. Buck argues along these lines in his post on doing good through non-standard EA career paths.
That would also be my response to ‘but you can work in government or academia’. As soon as “impact” is not, strictly speaking, in your job description, and your impact therefore won’t just come from having higher aptitude than the second-best candidate, you can possibly do a lot of good by showing a lot of initiative.
The same applies to what Jonas said below:
I’m also thinking that there seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I’ve also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I’m sure I’m missing many further examples. I think these paths are harder to find than priority paths, but they exist, and often seem pretty impactful to me.
If you are good at taking initiative, you may be able to find the high-impact paths that are harder to find than the priority paths, and “make up” for lower ability this way.