On Elitism in EA
Introduction
Elitism often gets a bad rap. Its role in EA is complicated, and though it sometimes leaves a bad taste in our mouths, we think elitism is better understood in shades of gray than in black and white. In this post, we’ll look at when elitism can be useful in EA and when it can be detrimental. Hopefully, a more nuanced understanding of elitism and its benefits and drawbacks can lead to a more productive conversation around its place in community building.
A closer look
Elitism in EA usually manifests as a strong preference for hiring and funding people from top universities, companies, and other institutions where social power, competence, and wealth tend to concentrate. Although elitism can take many other forms, for our purposes, we’ll be using this definition moving forward.
We’ve categorized several traits in the following table by whether they’ll be selected for during prestigious recruiting/hiring processes, or whether they’ll be independent/selected against. Feel free to propose edits or other traits in the comments! We’ve found this table useful for thinking through situations when elitism may or may not be appropriate.
| Traits that elitism tends to select for | Traits that elitism tends to select against (or neutral) |
| --- | --- |
| Ambition/desire for power | Altruism/desire to help others |
| Problem-solving | Agency/agentic-ness |
| Self-motivation and self-regulation | Critical thinking |
| Academic/intellectual competence | Risk-taking/rebelliousness (e.g. choosing safer career options like finance, medicine, Big Tech) |
| Possession of social power | |
| Access to resources | |
Pros of elitist selection
1. Talent
Prestigious programs select for a baseline of traits EA generally looks for. Making something more elite in an academic context will draw in competent and ambitious talent. Given that impact is heavy-tailed, there is often an orders-of-magnitude difference between the expected impact of median vs. top-percentile talent.[1]
2. Class and social selection
Elite selection (e.g. at top universities) will often select for people who have a baseline of financial stability. EA careers aren’t as stable as alternative career pathways for most people (e.g. teacher, doctor, researcher), and financial stability is an important prerequisite to getting more involved. It’s far easier to consider earning to give if you’re making $100k+ a year.
Edit: It’s challenging for students and workers from even middle-class families to devote several hours a week to preventing a far-off risk while struggling to pay off $50,000+ in student loans and support their families.
Cons of elitist selection
1. Optics and Demotivation
As EA becomes more mainstream, we should be careful with how EA’s image grows. An elitist reputation may kneecap recruitment efforts and the movement’s impact. Additionally, internal perceptions of EA may turn sour as conferences and organizations start being viewed as “only for the elite” vs. “open-to-all.” It can be incredibly demotivating being told that your potential for impact is far less than a select few.
2. Epistemics and Homogeneity
Recruiting from the same 10–20 universities, which all have similar demographics, makes groupthink more likely. This is problematic because novel and creative solutions are in high demand. Lack of diversity is also a problem EA already struggles with, and elitist recruiting promotes a self-perpetuating cycle.
3. Altruism
Prestige doesn’t select for people who want to do the most good. This can be counteracted by recruitment processes that select more heavily for altruism and the self-selection effects of EA as a movement, but given the importance of strong value-alignment within EA, this is potentially damaging in the long-term.
4. Practicality
Elitist selection will miss great people who haven’t had access to these elite institutions and environments. It might pass over the weird and non-traditional candidates, the ones who might be able to make the largest impact. It doesn’t really select for traits that we might want, such as risk-taking, contrarianism, and agency. At the same time, no selection process is perfect, and it all depends on the specific situation and the traits we want to select for.
When is elitism appropriate?
There are many situations in which elitist recruitment processes can be instrumentally useful. Here are a few examples:
Senior-level positions
In these cases, elitism can be beneficial because specialized competence and leadership are key to being successful in senior roles (e.g. CEOs, senior AI engineers, research leads), and these traits are highly correlated with elite environments.
Cofounder searches
Many of the points in the previous example also apply here. The initial team of an organization often makes or breaks it, so this team should be especially competent, cohesive, self-driven, and well-financed. Having a strong network of elite connections and access to resources is also crucial for ambitious ventures.
On the other hand, things are more complicated when we consider the type of organization that is hiring and the traits they are looking for. For new AI research labs, selecting for prestige probably correlates with finding good cofounders. If the organization is pursuing a riskier approach, where technical expertise is less necessary, elitism might select against the kind of people you’re looking for.
University groups at early stages
Early-stage university groups seem to follow a principle of recruiting every new member to become an exec. Though useful in the short term, this potentially sacrifices the group’s effectiveness in the long term. Disagreements may crop up, and it’s much harder to remove discordant team members after placing them in positions of power.
The initial team also benefits greatly from being highly competent and self-motivated.
“The original Mac team taught me that A-plus players like to work together, and they don’t like it if you tolerate B-grade work.” — Steve Jobs
High-level conferences
Field-specific conferences—such as an AI safety or a biosecurity conference—benefit from restricting the conference to those with expertise. This ensures that everyone in attendance can contribute to the conversations or otherwise will benefit greatly from being exposed to the content.
When is elitism unhelpful?
On the flip side, in many instances being elitist isn’t the best tool for the job, and won’t select for the desired traits:
Project funding and entrepreneurship
Groundbreaking entrepreneurship usually requires a good amount of altruistic instinct, risk-taking, and the ability to think outside the status quo of what already exists or is likely to be successful. These traits don’t correlate much with elite environments, and funding regular people within EA also gives them access to resources they are less likely to have than people from elite environments.
Entry-level employees
For entry-level positions (e.g. research interns, junior engineers) competence differences between those from elite and non-elite backgrounds matter less. Favoring non-elites at this level also gives them an opportunity to gain experience, which is generally easier for elites to obtain.
EAGx conferences and some EAGs
General conferences are a great place for new people to gain connections and opportunities within EA, and are probably most valuable to people who would otherwise not have opportunities to network with working professionals.
Generalist/operations positions
Skills that make people good generalists/managers/assistants are not fully correlated with elite environments. For example, traits such as critical thinking and a sharp intuition are useful for generalists.
Conclusion
Discussions about elitism in EA have tended to treat it as a trait or characteristic that should either be built into or phased out of EA culture. This creates an all-or-nothing understanding of elitism as a concept. We think that framing elitism instead as a tool that can be useful in some circumstances and counterproductive in others is a more accurate and productive way of tracking how it fits into EA culture and best practices. Sparking more discussion about elitism and the attitudes community-builders have towards it would be really valuable for mapping the current and future landscape of community building, so any and all comments are much appreciated!
[1] In EA, there’s a pretty solid correlation between people who have started big and impactful projects and their origins in elite environments (Sam Bankman-Fried, Will MacAskill, Holden Karnofsky, etc.). Some of the most successful companies in the world (e.g. Google, Apple, PayPal) have historically also been quite selective and operate within a sphere of prestige.
Bias alert: I didn’t go to an elite university; I might be sensitive about this.
I think it’s probably bad to tell (very) talented young people from non-elite backgrounds to work at EA jobs, if we are not willing to give non-elites a chance at taking senior-level or leadership positions.
Instead, in such a world, we should probably advise such people to aim for the most prestigious non-EA opportunities that they can get (grad school, FAANG, entrepreneurship, etc), and ask them to come back to EA after they’ve made a name for themselves.
I think a) such advice isn’t terrible but we’re probably leaving a lot of value on the table if orgs can’t have junior non-elites and also a lot of people are aiming at prestige instead of more socially useful forms of career capital, and b) your post explicitly recommends against that for entry-level employees, which I find confusing.
Just spinning off of this, being also someone without an elite school brand to fall back on, I am often hesitant to pursue non-technical work in any capacity and especially so at an EA org due to the nicheness and its lack of legibility.
A decent compromise is technical work at an EA org but even that feels like a gamble sometimes without grinding my way into a FAANG company to establish the aptitude credential first.
Thanks for your comment Linch! We appreciate the feedback.
To clarify, competence and fit are ultimately the most important considerations for a position. We don’t think you should prevent talented young people from non-elite backgrounds from taking on senior-level positions. Our claim is closer to: 1) if you create a prestigious website/application/program, then you will get better candidates, and 2) experience at an elite institution is one factor among many that is fairly predictive of average competence/specialization. I’m now more uncertain about whether 1) is true.
I agree with a) in that explicitly aiming at prestige seems to lose a lot of energy better spent aiming elsewhere. I’m not too sure I understand what you mean by b), but to reiterate our point: we think that it’s basically pointless (and probably detrimental!) to be elitist for entry level positions, given the selection pressures.
Hopefully this clarifies our thinking a bit more, and thanks again for taking the time to share your thoughts!
Thanks for the clarifications! Sorry my wording was confusing!
I’m not sure, but I think I disagree here. You often have fewer signals of demonstrated competency, success, and track record for most people who apply for entry-level positions. So for, e.g., a summer research or journalism internship, it makes sense to (relatively) upweight signals like attending elite universities, since you can trust that the elite universities did an okay job of filtering candidates for competency.
In contrast, you might hope that for mid-level and senior positions, people with both elite and non-elite backgrounds have had more time to “prove themselves” in other avenues, so you have more clear signals of ability to do a job (because they might have done past similar jobs), whether in or out of EA. While some of this could look like “prestige” (e.g. top companies as you mentioned in your post), much of it could look like having specific references to vouch for you, or detailed records/impact of specific projects you did.
I think you’re missing the biggest disadvantage of elitism—it makes it so the people deciding on “how to help the world the most” are the people least in need of that help, and least informed about what the world’s problems actually feel like. Thus it makes us much more prone to missing important problems or applying bad solutions (or, worse yet, bad criteria).
Cf. Glen Weyl’s “Why I am not a technocrat”.
Edit: also it’s self-reinforcing. And my conclusion is that it should be very strongly avoided.
So what, concretely and scalably, can people who don’t need help because they have lots of resources (and thus are actually capable of helping) do to figure out what the people who need help actually need, that the EA community is not doing?
For example:
Empower and encourage underrepresented (or unrepresented) people to participate in the movement
Deprioritise recruitment from elite universities, and aim instead to recruit from the least represented communities
Factor involvement of locals into recommendations of intervention implementations
I’d guess Givewell top charities do involve locals a lot in their programs, but I haven’t checked
Be wary of putting lots of resources into projects which may only benefit well-off people. That doesn’t mean we shouldn’t work on X-risk reduction; but it does mean we should give higher priority than we do now to things like:
Acting through governments and democratic processes
Gaining wide public support
Having a wide debate on the threats, the mitigation options, and whether it’s valuable to proceed with them
I’m a bit confused why these examples received these downvotes. Would someone care to explain what EA is in fact doing in these areas, if that is the reason?
I think elitism is overrated, even for senior-level positions and co-founders.
Not coincidentally, I didn’t go to an elite university.
I think people screening on elitism is a lazy heuristic that sometimes works but often times we can do much better.
And after screening potentially 10,000 applicants for over 50 positions at Rethink Priorities—including multiple senior-level roles—I don’t notice that traditional markers of elite background (e.g., going to Harvard/Oxford) have any correlation with whether people do well on our test tasks, get hired, and are ultimately successful at Rethink Priorities.
Also with regard to class and social selection, we aim to always pay people enough to afford an American middle class lifestyle regardless of their prior socioeconomic resources. (Our pay is ~$67K/yr USD for entry level, ~$111K for senior level, and ~$122K for director level.)
Have you run data analysis on this? I’m a bit surprised given the hiring rounds I’ve actually seen, though we do have a lot of people with non-elite backgrounds in senior positions. For example, on our website, we have 12 people (including temporary fellows) in Longtermism, and 5 of them come from what I’d consider “elite” universities (2 Yale, Oxford, Cambridge, UChicago), plus two edge cases. So for traditional markers of elite background to have no correlation with how people do on our test tasks, we’d need our pool of job applicants to have ~50% people from elite colleges.
Which isn’t impossible, given the demographics of EA and relevant self-selection of who might want to apply to research EA jobs, but would be surprising to me.
I do think RP is better for people with backgrounds that aren’t traditionally prestigious than many other institutions, possibly due to founder effects.
No specific analysis as we don’t collect this data for our applicants and we don’t judge applicants based on it.
Looking now at this list of top 25 global universities and looking at the backgrounds of our research and executive teams, we are 13/41 (32%) for “people attended a top 25 university for any of their education or post-doc work” (5/12 or 42% for people with people management responsibilities).
Maybe that’s actually undercutting my point. But I don’t think say trying to target our recruiting to elite universities or trying to give bonus points to people with elite backgrounds would be better for getting us better hires.
Also I picked my “top 25” cutoff before I saw what data we had on EA as a whole, but it looks like per EA Survey 2019 data we had 22% of EAs as attending a top 20 university (among those for which we gave data for which university they attended).
Preface: There is every possibility that I have misinterpreted or misread the post. Please do let me know if this is the case and I will rescind this comment.
This is an interesting post, thank you for making it. It must have taken a lot of effort, and it’s always a bold move putting culture and community related thought pieces out there because they’re often not ‘safe’ topics. Though I enjoyed many of your past posts, I don’t agree with this one for a couple of reasons.
Firstly, I think the term ‘elitism’ is used too broadly in some areas. E.g., sometimes you use it to mean excluding people from different backgrounds, and sometimes something much broader. I think it would be helpful to define more clearly what you mean by it. I know you both say “Elitism in EA usually manifests as a strong preference for hiring and funding people from top universities, companies, and other institutions where social power, competence, and wealth tend to concentrate,” but that doesn’t appear to track consistently through the post.
Secondly, some points are contradictory. Eg you state:
But you also state:
This links to my first point in that it’s not clear whether you’re talking about ‘elite environments’ as people who are the best in their field being brought together, or people from supposedly ‘elite’ institutions/organisations. These quotes are from different sections, however, so perhaps I have divorced them from context here.
I see quite frequently in EA discourse that ‘elite’ universities are equated with the best talent. For a community based on rejecting assumptions, it’s quite a leap to make. I’m not saying people from those universities aren’t great (because they are), but there’s an important point to consider.
If we take two of the most ‘elite’ universities in the UK, Oxford and Cambridge, and take but a cursory look around their website we find the following:
Cambridge University
and also
Oxford University
and also
If you then Google the most expensive cities in the UK, you’ll notice some familiar names. Oxford often sits at the #1 spot, in fact.
A trend appears here—and it’s not unnoticed. Research by Cherwell found that the national average for students working part-time was 57%, with 90% of those working as many as 20 hours per week. In Oxford? Only 20% of students are employed and the majority work less than five hours per week.
So how are these students funded? There are a variety of scholarships available, but some quick sums on a calculator show these don’t quite add up. Oxford is one of the UK’s most expensive cities, and even on the maximum student loan amount you’re still £2,000–£5,000 short each academic year. Cherwell found that to make this up, a student would need to work 850 hours a year—which amounts to full-time for 22 weeks. Essentially, to not only excel but even participate at these universities you need savings—often from family or sponsors. I will say this, because it’s important: many students from low-income families do go to these universities and manage to make it work. So if you’re reading this and this is you, don’t be mad. I’m not saying it’s impossible—I’m saying it’s an influence in creating a trend where wealth matters as much as or more than ability.
So for these reasons I would challenge this assumption that elite universities are collections of the best and brightest in the country. They’re a collection of intelligent people who also have the financial ability to go. I would go as far as to say, and I am expecting significant pushback on this, that if anything this is less valuable to draw from, as on average these people will have faced fewer obstacles and gained less life experience than equally able peers from different socioeconomic brackets. You mention earlier traits like ‘leadership’ and ‘agency’, and I would argue that these are traits more commonly found in those who bootstrap rather than in those who come from ample opportunity.
It’s entirely possible you didn’t intend ‘elite environments’ to mean this, but perhaps that could do with being cleared up. At any rate, I’ve long posited that drawing from certain universities doesn’t guarantee access to the best talent and that this elitist view robs EA of lots of potentially highly impactful talent. A look at recent history’s most transformative technologies and people doesn’t reinforce the idea that there’s a correlation between university eliteness and capacity for impact.
Again, I reiterate because people can jump to conclusions, I’m not saying elite universities don’t select from highly able people. Because they do. They have higher numbers of more able people there than anywhere else because they can select from a greater pool. I’m saying that they are not made up of the most able people.
So as far as elitism goes, by selecting for the people based on socioeconomic factors, we not only suffer from a lack of intellectual range of thought (as you rightly state in your post), but elitism becomes a self-fulfilling spiral.
Finally, though I appreciated some elements of your post, I honestly struggle to think of any genuine examples where elitism is in any way helpful either for research purposes or within EA. I would very strongly push back against that assumption. One particular part:
”financial stability is an important prerequisite to getting more involved”
I’m not sure how this could be true, unless you’re referencing socioeconomic barriers within EA—in which case removing them would cost a tiny fraction of what losing that much talent does. A $100,000+ salary could be useful for earning to give, but in terms of actually interacting with EA and contributing to cause areas, pretty much every highly impactful EA I personally know is on well under half of that.
These are just my takes, and there’s every chance people will heartily disagree with me. But that’s what makes our ideas better :) Perhaps a different word might be better—that doesn’t carry the connotations of elitism? Just an idea!
Thanks for taking the time to write this response! We really appreciate the feedback.
A couple of points:
On the first and second point, I agree that we could have been much more rigorous about the specifics of “what we mean by elitism.” We mostly mean elite institutions and organizations, which we used interchangeably with elite environments (e.g. having worked at SpaceX, or having studied at MIT).
Sometimes (maybe even often?), the best in the field won’t be from an ‘elite’ institution (e.g. Ramanujan). I agree that elite institutions =/= best talent. The claim we’re making is that elite institutions correlate strongly with fairly great talent, depending on the situation. We mention in the post that elite selection can systematically miss very great people, especially on traits like agency or risk-taking (entrepreneurial types).
“this is less valuable to draw from as on average these people will have faced fewer obstacles and gained less life experience than equally able peers from different socioeconomic brackets.”
I agree that equally able peers from different socioeconomic brackets could likely be better, for many of the reasons you stated. But the question is how to find these peers? If by equally able, you mean that those students attend the same institutions and the only difference is that they are from a lower socioeconomic bracket, we don’t disagree.
”You mention earlier traits like ‘leadership’ and ‘agency’”
It’s hard to speak about these things without concrete numbers, and there’s no doubt that leadership is also formed in people without access to elite environments. On agency, I agree with you. We explicitly mentioned it as a trait that isn’t correlated much with elite environments.
On the last point, I’ve clarified our point and edited the original post. The claim is that people with the affordance to focus a lot of time on EA tend to skew towards people with the privilege to do so. There are certainly many dedicated EAs who haven’t come from places of privilege, and they’ve worked incredibly hard to get to where they are. I think this is awesome! I don’t want to discount any of this. But often, the foundational and basic needs need to be satisfied (e.g. financial, time, etc).
As a final note, I want to emphasize that people from non-elite institutions can, and often do, do amazing work. Elite institutions don’t ensure the “best” people, or even “better” people, only a baseline of fairly competent people, depending on your situation.
Thanks again for writing up your comment! We really want to encourage discussion around topics that are often “taboo” but important and widely present in the movement.
I feel conflicted about this. On the one hand, would I rather hire and work with more smart, clever, bright, caring people? Yes, definitely. But if those are the values you care about, how do you select for them? If we use pedigree and access to elite institutions as a filter, then it seems that we are largely just selecting for access to those institutions, which is often not indicative of the values we actually care about.[1]
People who have access to top universities, companies, and other institutions where social power, competence, and wealth tend to concentrate are often at those institutions because they are smart and hard-working. But that isn’t the only factor. People are also at these institutions because their grandfather or father was at that institution[2], because they had wealthy parents who could afford tutoring which boosted test scores, or because they happened to be in the right place at the right time[3]. Both with companies and with universities, acceptance into the institution ought to be meritocratic, but large parts of the process consider factors other than merit.
For those of us that have connections to elite schools or elite companies, we know that there are plenty of people at these institutions that aren’t so bright.
If we describe elitism as selecting for the best and the brightest (trying to put aside the historical baggage that phrase has), then that sounds meritocratic and I like that idea, even though it is hard to implement. If we describe elitism as selecting for people that have access to selective institutions, then I am not a fan.
I haven’t read any research on the Cohen’s d or correlation between “attended an Ivy League school” and those other nebulous values (smart, clever, bright, caring), but I would be very interested to see it if it exists. My assumption is that there is a weak positive trend, but I’d prefer to have real data rather than merely going on my assumptions.
A cursory Wikipedia search reveals that “between 2014 and 2019, Harvard University accepted legacy students at a rate of 33%—more than five times higher than its overall acceptance rate during this period of 6%.”
How many of us have stories of getting a particular job or making a particular connection not due to the results of our effort or due to our being abnormally bright, but simply due to happenstance?
This post makes me uncomfortable. I think elitism is straightforwardly pretty bad, and it should be discouraged in EA. To be clear, what I strongly object to is a preference for hiring/funding people from elite institutions. If EA orgs hire/fund people in a way that’s blind to their backgrounds and they end up hiring/funding disproportionately many people from elite institutions, that’s more complicated.
Why is elitism bad?
-I have strong emotional, not-really-utilitarian intuitions that it’s bad to not give people a fair chance because of their background
-elite institutions maybe select for competence and intelligence, but highly imperfectly—many competent/intelligent people have no association with elite institutions, and many people at elite institutions are there because of privilege or having learnt how to blag and bluff very well, rather than competence. (I’ve attended 2 elite universities, so I know this to be true, lol!) I think there are just many more things hirers/funders could do to work out who’s the best fit.
-your 2nd ‘pro’ - that financial stability is necessary for getting involved—could actually be a con in some areas. If a project has to receive funding from the community, that’s obviously more (financially) costly for the community, but the advantage is that the project has passed some quality filter—someone has decided to fund it, so it’s more likely to be good. Whereas if someone is self-funding, they might be doing something that is misguided. E.g., I know a couple of people who have got FTX grants to do projects. Without those grants, they would have found it hard to quit their day jobs and do the projects. I think (a) it’s good that they got this funding, and (b) the fact that they got the grants is a strong vote of confidence in them/their projects.
To be clear, grantmaking of this kind also has flaws. I just think it would be bad if the only people heading up projects in EA were people who happened to be wealthy, either because of their family background or because they happened to have the skills/inclination to get a high-paying job.
Thanks for the post. Your post/chart is exactly what I have been thinking about recently, and I’m glad to see someone started the conversation in 2022.
What do you mean by competence? Is it the skills, knowledge, connections, and presentation that advance these institutions? Does the advancement include EA-related innovation? Is this competence generalizable to EA-related projects?
Is social power the influence over acceptable norms due to representing that institution or having an identity that motivates others to make a mental shortcut for such ‘deference to authority’? Could social power be gained without appealing to traditional power-related biases?
Critical thinking in solving problems related to achieving the institution’s objectives is supported, while critical engagement with those objectives may be selected against. This also implies that no one thinks about the objectives, which can be boring and leave people feeling a lack of meaning; companies could be glad to entertain conversations about the various possible objectives.
Effective altruism—desire to help others the most while valuing all, even those outside of one’s immediate circles, more equally. Elite decisionmaking is to an extent based on favors and dynamics among friends and colleagues.
I’d say acceptance/internalization of the specific traditional hierarchical structure and understanding oneself as competent to progress within this structure.
I am assuming that you are assuming the ‘eliteness’ metric as a sum of school name, parents’ income, and Western background? Please reduce my bias.
Is the correlation apparent? For example, imagine that instead of (elite) Rob Mather raising billions for a bednet charity, a (non-elite) thoughtful person with a high school education and $5/day had started organizing their (also non-elite) friends to talk about cost-effective solutions to all issues in sub-Saharan Africa in 2004 and had been raising the billions since then, as solutions were developed. Maybe many more problems would have been solved better.
Counter-examples (started big and impactful projects from non-elite background) may include Karolina Sarek, William Foege (Wiki), and Jack Rafferty. It can be interesting to see this percentage in the context of the % of elite vs. non-elite people in EA (%started impactful projects from elite/%elite in EA)/(%started impactful projects from non-elite/%non-elite in EA). Further insights on the relative success of top vs. median elite talent can be gained by controlling for equal opportunities (which can be currently assumed if funding is awarded on the basis of competence).
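The ratio-of-ratios in the parenthetical above is essentially a representation index: how over- or under-represented elites are among founders of impactful projects relative to their share of the community. As a minimal sketch with entirely made-up numbers (the function name and all figures are hypothetical, purely for illustration):

```python
def representation_ratio(founders_elite, founders_nonelite, pct_elite, pct_nonelite):
    """(% of impactful-project founders who are elite / % elite in EA)
    divided by (% of founders who are non-elite / % non-elite in EA)."""
    elite_rate = founders_elite / pct_elite
    nonelite_rate = founders_nonelite / pct_nonelite
    return elite_rate / nonelite_rate

# Hypothetical: 70% of impactful founders are elite, but elites are 40% of EA.
ratio = representation_ratio(0.7, 0.3, 0.4, 0.6)
print(ratio)  # a value > 1 means elites are over-represented among founders
```

A ratio of 1 would mean founders of impactful projects are drawn proportionally from elite and non-elite EAs; the controlling-for-opportunities step described above would then ask how much of any deviation from 1 reflects talent rather than access.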
So, while EA was funding-constrained, it made sense to attract elites. Now, this argument applies to a lesser extent.
Unless it is true: for example, if impact is interpreted as representing an institution that aspires to normative change, you may realize that speaking with elite people in an elite way is not really for you anyway, and do something else instead, such as running projects or developing ideas. That is an equal dynamic, in which 'potential for impact' is just a phrase.
Thinking-diversity norms can matter more for having (or avoiding) groupthink than the composition of the group does, since people interact with others. For example, if the norm is prototyping solutions with intended beneficiaries, engaging them in solving the issues and stating their priorities in a way that mitigates experimenter bias and motivates thoughtful sincerity, and considering a maximally expanded moral circle, then the quality of solutions should not be reduced even if people from only 10–20 schools are involved. On the other hand, if the norm is, for instance, that everyone reads the same material and is somewhat motivated to donate to GiveWell and spread the word, then even a diverse group engages in groupthink.
Prestige selects for people among whom the highest share want to do the most good when offered reasoning and evidence about opportunities, at least if prestige is interpreted that way. Imagine, for instance, a catering professional being presented with evidence that vegan lunches do the most good. Their normative background may not leave much room for impact considerations if acting on them would mean forgone profit (unless it does). If EA should keep its value through altruistic rather than other (e.g. financial) motivation, then recruitment should attract altruistic people who want to be effective and discourage others.
So, it depends on the senior-level position. If you want to make changes in an authoritarian government, an (elite) insider will be very helpful. Similarly, a (non-elite) insider would be helpful for developing solutions within a non-elite context, such as solving priorities in Ghana for under $100m. It does not matter whether normative solution developers (such as AI strategy researchers) are elite, as long as they understand and equally weigh everyone's interests. Positive discrimination for roles in which elites may have a better background (e.g. due to specialized school programs), such as technical AI safety research, may be counterproductive to the success of the area: less competent people would lead the organizations, and since the limited number of applicants from non-elite backgrounds is caused not by unwelcomingness but by limited opportunities to develop background skills, positive discrimination would not further increase diversity.
Complementarity can be considered: for example, pairing someone who can find the >$100m priorities in Ghana with someone who can raise the amount needed. However, funding from one's own network can also prevent that entire network from funding a much better project in the future, so not all elite people should be supported in advancing their own projects, since there are relatively many elites and few elite networks, unless offering the opportunity to fund a relatively less unusual project first enables the support of a more unusual (and more impactful) project later. If the project objective is well-defined and people receive training, then anyone who can understand the training and make sure the work gets done can qualify.
You are grading 'playing with Macs.' I think Bill Gates dropped out of college. And, just based on these two examples: if you compare their philanthropy … This means that whoever is not cool cannot participate? Also, if students get used to upskilling others (and tolerating or benefiting from that), then EA can become less skills-constrained later and create more valuable opportunities for engaging people who score around the 70th (or 95th) percentile on standardized exams.
While a biosecurity conference should probably 'benefit' only people 'vetted' by elite (if so defined) institutions as unlikely to actually think about making pathogens (since biosecurity is currently relatively limited), an AI safety conference can be somewhat more inclusive of 'possibly risky' people. This assumes that making an unaligned superintelligence is much more difficult than creating a pathogen.
AI safety conferences should exclude people who would make the field non-prestigious or strip it of the spirit of 'the solution to a great risk': for example, make it seem like an appeal from online media users for platforms to reduce biases in algorithms that affect them negatively. Perhaps even more than one's elite background, the ability to keep up that spirit may correlate with a traditionally empowered personal identity (such as gender and race) and with internalization of these norms of power (rather than critical thinking about them). Not everyone with this ability of 'upholding a unique solution narrative' must be from that demographic, and not everyone in that group has to have the ability (only a critical mass does). This applies as long as people negatively affected by traditional power structures perceive a negative emotion that would prevent them from presenting objective evidence and reasoning to decisionmakers.
So, everything except community building and entry-level employment? Should there be community building in non-elite contexts (while elites, in some sense, within or beyond those contexts may or may not be preferred)? One counterargument is similar to the AI safety 'spirit' one above: people would be seen as suffering from disempowerment and would thus appeal less effectively. Another is the 'your standards' one: people who would slack with Bs in impact would just be OK with some problems left unresolved. Arguments in favor include diversity of epistemics, problem awareness, and solution-relevant insights, and the facilitation of mutually beneficial cooperation (e.g. elites gain the wellbeing of people who have more time to develop non-strategic relationships, and non-elites gain the standards of perfecting solutions), both within EA and in project outcomes.
It may depend on the org. Some orgs (e.g. high-profile fundraising) that generally prefer people from elite backgrounds may prefer them for entry-level positions too. This may account for an argument held by the 'target audiences' of these orgs: 'As are disgraced by Bs and would not do a favor for them, since they would not gain acknowledgement from other As but could be perceived as weak or socially uncompetitive.'
If doing nothing and waiting for social norms to change is appropriate, non-elites should be excluded from these entry-level roles. Alternatively, the org can actively change the norms, either by training non-elites to resemble elites (which can be suboptimal, since it exhibits acceptance of the elite standard, which is thus exclusive) or by accepting anyone who can make the target-audience elites realize that their standard is not absolute. In the latter case, the eliteness of one's background should not contribute to hiring decisions.
Depending on the attitude of the key decisionmakers at EAGs/EAGxs, such as large funders, eliteness should be preferred, not used as a selection criterion, or dis-preferred. It is possible that, in this context, anyone who demonstrates willingness and potential to make a high impact can be considered elite.
Is it that elites have less sharp intuition than non-elites? An argument in favor is that elites are in their positions because they reflect the values of their institution without emotional issues, which requires dampening one's intuitive reasoning. If an institution values critical thinking, gaining information from a diversity of sources, and forming opinions without regard for one's acceptance in traditional hierarchies, then elites can develop intuition.