I’m going to be boring/annoying here and say some things that I think are fairly likely to be correct but may be undersaid in the other comments:
EAs on average are noticeably smarter than most of the general population.
Intelligence is an important component for doing good in the world.
The EA community is also set up in a way that amplifies this, relative to much of how the rest of the world operates.
Most people on average are reasonably well-calibrated about how smart they are.
(To be clear, exceptions certainly exist.) EDIT: This is false; see Max Daniel’s comment.
If you’re less smart than average for EAs (or less driven, or less altruistic, or less hardworking, or have less of a social safety net), then on average I’d expect you to be less good at having a positive impact than others.
But this is in relative terms; in absolute terms, I think it’s certainly possible to still have a large impact.
Our community is not (currently) set up well to accommodate the contributions of many people who don’t check certain boxes, so I expect there to be more of an uphill battle for many such people.
I don’t think this should dissuade you from the project of (effectively) doing good, but I understand and empathize if this makes you frustrated.
“Most people on average are reasonably well-calibrated about how smart they are.”
(I think you probably agree with most of what I say below and didn’t intend to claim otherwise, reading your claim just made me notice and write out the following.)
Hmm, I would guess that people on average (with some notable pretty extreme outliers in both directions, e.g. in imposter syndrome on one hand and the grandiose variety of narcissistic personality disorder on the other hand, not to mention more drastic things like psychosis) are pretty calibrated about how their cognitive abilities compare to their peers but tend to be really bad at assessing how they compare to the general population because most high-income countries are quite stratified by intelligence.
(E.g., if you have or are pursuing a college degree, ask yourself what fraction of people that you know well do not and will never have a college degree. Of course, having a college degree is not the same as being intelligent, and in fact as pointed out in other comments if you’re reading this Forum you probably know, or have read content by, at least a couple of people who arguably are extremely intelligent but don’t have a degree. But the correlation is sufficiently strong that the answer to that question tells you something about stratification by intelligence.)
That is, a lot of people simply don’t know that many people with wildly different levels of general mental ability. Interactions between them happen, but tend to be in narrow and regimented contexts such as one person handing another person cash and receiving a purchased item in return, and at most include things like small talk that are significantly less diagnostic of cognitive abilities than more cognitively demanding tasks such as writing an essay on a complex question or solving maths puzzles.
For people with significantly above-average cognitive abilities, this means they will often lack a rich sense of how, say, the bottom third of the population in terms of general mental ability performs on cognitively demanding tasks, and consequently they will tend to significantly underestimate their general intelligence relative to the general population because they inadvertently substitute the question “how smart am I compared to the general population?” – which would need to involve system-2 reasoning and consideration of not immediately available information such as the average IQ of their peer group based on e.g. occupation or educational attainment – with the easier question “how smart am I compared to my peers?” on which I expect system 1 to do reasonably well (while, as always, of course being somewhat biased in one direction or the other).
As an example, the OP says “I’m just average” but also mentions they have a college degree – which according to this website is true of 37.9% of Americans of age 25 or older. This is some, albeit relatively weak, evidence against the “average” claim depending on what the latter means (e.g. if it just means “between the first and third quartile of the general population” then evidence against this is extremely weak, while it’s somewhat stronger evidence against being very close to the population median).
This effect gets even more dramatic when the question is not just about “shallow” indicators like one’s percentile relative to the general population but about predicting performance differences in a richer way, e.g. literally predicting the essays that two different people with different ability levels would write on the same question. This is especially concerning because in most situations these richer predictions are actually all that matters. (Compare with height: it is much more useful and relevant to know, e.g., how much different levels of height will affect your health or your dating prospects or your ability to work in certain occupations or do well at certain sports, than just your height percentile relative to some population.)
I also think the point that people are really bad at comparing themselves to the general population, because society is so stratified in various ways, applies to many other traits, not just to specific cognitive abilities or general intelligence. Like, I think that question is in some ways closer to the question “at what percentile of trait X are you in the population of all people that have ever lived”, where it’s more obvious that one’s immediate intuitions are a poor guide to the answer.
(Again, all of this is about gradual effects and averages. There will of course be lots of exceptions, some of them systematic; e.g. depending on their location of work, teachers will see a much broader sample and/or one selected by quite different filters than their peer group.
I also don’t mean to make any normative judgment about the societal stratification at the root of this phenomenon. If anything I think that a clear-eyed appreciation of how little many people understand of the lived experience of most others they share a polity with would be important to spread if you think that kind of stratification is problematic in various ways.)
I think you’re entirely right here. I basically take back what I said in that line.
I think the thing I originally wanted to convey there is something like “people systematically overestimate effects like Dunning-Kruger and imposter syndrome,” but I basically agree that most of the intuition I have is in pretty strongly range-restricted settings. I do basically think people are pretty poorly calibrated about where they are compared to the world.
(I also think it’s notably more likely that Olivia is above average than below average.)
Relatedly, I think social group stratification might explain some of the other comments to this post that I found surprising/tone-deaf (e.g. the jump from “did a degree in sociology” to “you can be a sociologist in EA” felt surprising to me, as someone from a non-elite American college who casually tracks which jobs my non-STEM peers end up in).
Yes, that’s my guess as well.
This feels like it misses an important point. On the margin, maybe less intelligent people will have on average less of an individual impact. But given that there are far more people of average intelligence than people on the right tail of the IQ curve, if EA could tune its pitches more to people of average intelligence, it could reach a far greater audience and thereby have a larger summed impact. Right?
I think there are also a couple of other assumptions in here that aren’t obviously true. For one, it assumes a very individualistic model of impact; but it seems possible that the most impactful social movements come out of large-scale collective action, which necessarily requires involvement from broader swaths of the population. Also, I think the driving ideas in EA are not that complicated, and could be written in equally-rigorous ways that don’t require being very smart to parse.
This comment upset me because I felt that Olivia’s post was important and vulnerable, and, if I were Olivia, I would feel pushed away by this comment. But I’m rereading your comment and thinking now that you had better intentions than what I felt? Idk, I’m keeping this in here because the initial gut reaction feels valuable to name.
Thanks, I appreciate this feedback.
Anyway, on a good day, I try to aim my internet comments on this Forum to be true, necessary, and kind. I don’t always succeed, but I try my best.
“This comment upset me because I felt that Olivia’s post was important and vulnerable, and, if I were Olivia, I would feel pushed away by this comment. But I’m rereading your comment and thinking now that you had better intentions than what I felt? Idk, I’m keeping this in here because the initial gut reaction feels valuable to name.”
I think realizing that different people have different capacities for impact is importantly true. I also think it’s important and true to note that the EA community is less well set up to accommodate many people than other communities. I think what I said is also more kind to say, in the long run, compared to casual reassurances that make it harder for people to understand what’s going on. I think most of the other comments do not come from an accurate model of what’s most kind to Olivia (and onlookers) in the long run.
Is my comment necessary? I don’t know. In one sense it clearly isn’t (people can clearly go about their lives without reading what I said). But in another sense, I feel better about an EA community that is more honest with potential members about our best guesses as to what we are and what we try to do.
In terms of “pushed away”, I will be sad if Olivia (and others) read my comment and feel dissuaded from the project of doing good. I will be much less sad about some people reading my comment and it being one component in them correctly* deciding that this community is not for them. The EA community is not a good community for everyone, and that’s okay.
(Perhaps you think, as some of the other commentators seem to, that the EA community can do a ton more to be broadly accommodating. This is certainly something that’s tractable to work on, e.g. we can emphasize role models who are more like the people in Strangers Drowning rather than top researchers and entrepreneurs. But I’m not working on this, and chances are, neither are you.)
*There is certainly a danger of being overly prone to saying “harsh truths”, such that people are incorrectly pushed away relative to a balanced portrayal. But I still stand behind what I said, especially in the context of trying to balance out the other comments that were in this post before I commented, notably before Lukas_Gloor’s comment.
FWIW I strongly agree with this.
“I think realizing that different people have different capacities for impact is importantly true. I also think it’s important and true to note that the EA community is less well set up to accommodate many people than other communities. I think what I said is also more kind to say, in the long run, compared to casual reassurances that make it harder for people to understand what’s going on. I think most of the other comments do not come from an accurate model of what’s most kind to Olivia (and onlookers) in the long run.”
Will we permanently have low capacity?
I think it is hard to grow fast and stay nuanced but I personally am optimistic about ending up as a large community in the long-run (not next year, but maybe next decade) and I think we can sow seeds that help with that (e.g. by maybe making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere).
Good question! I’m pretty uncertain about the ideal growth rate and eventual size of “the EA community”; in my mind this is among the more important unresolved strategic questions (though I suspect it’ll only become significantly action-relevant in a few years).
In any case, by expressing my agreement with Linch, I didn’t mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime “making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere” is (in some cases) the right goal.
Thanks 😊.
Yeah, I’ve noticed that this is a big conversation right now.
My personal take
EA ideas are nuanced and ideas do/should move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people.
However, the core bit of effective altruism, something like “help others as much as we can and change our minds when we’re given a good reason to”, does seem like an idea that has room for a much wider ecosystem than we have.
I’m personally hopeful we’ll get better at striking a balance.
I think it might be possible to both have a small group that is highly connected and dedicated (who maybe can move quickly) whilst also having many more adjacent people and groups that feel part of our wider team.
Multiple groups co-existing means we can broadly be more inclusive, with communities that accommodate a very wide range of caring and curious people, where everyone who cares about the effective altruism project can feel they belong and can add value.
At the same time, we can maybe still get the advantages of a smaller group, because smaller groups still exist too.
More elaboration (because I overthink everything 🤣)
Organisations like GWWC do wonders for creating a version of effective altruism that is more accessible and distinct from the vibe of, say, the academic field of “global priorities research”.
I think it is probably worth it on the margin to invest a little more effort into the people who are sympathetic to the core effective altruism idea, but who might, for whatever reason, not find a full sense of meaning and belonging within the smaller group of people who are more intense and more weird.
I also think it might be helpful to put a tonne of thought into what community builders are supposed to be optimizing for. Exactly what that thing is, I’m not sure, but I feel like it hasn’t quite been nailed just yet and lots of people are trying to move us closer to this from different sides.
Some people seem to be pushing for things like less jargon and more inclusivity. Others are pointing out that there is a trade-off here because we do want some people to be thinking outside the Overton Window. The community also seems quite capacity constrained and high-fidelity communication takes so much time and effort.
If we’re trying to talk to 20 people for one hour each, we’re not spending 20 hours talking to just one incredibly curious person who has plenty of reasonable objections and therefore needs someone, or several people, to explore the various nuances with them (like people did with me, possibly mistakenly 😛, when I first became interested in effective altruism, and I’m so incredibly grateful they did). If we’re spending 20 hours having in-depth conversations with one person, that means we’re not having in-depth conversations with someone else. These trade-offs sadly exist whether or not we are consciously aware of them.
I think there are some things we can do that are big wins at low cost though, like just being nice to anyone who is curious about this “effective altruism” thing (even if we don’t spend 20 hours with everyone, we can usually spend 5 minutes just saying hello and making people who care feel welcome and feel that their showing up is valued, because imo, it should definitely be valued!).
Personally, I hope there will be more groups that are about effective altruism ideas where more people can feel like they truly belong. These wider groups would maybe be a little bit distinct from the smaller group(s) of people who are willing to be really weird and move really fast and give up everything for the effective altruism project. However, maybe everyone, despite having their own little sub-communities, still sees each other as wider allies without needing to be under one single banner.
Basically, I feel like the core thrust of effective altruism (helping others more effectively, using reason and evidence to form views) could fit a lot more people. I feel like it’s also good to have more tightly knit groups with a more specific purpose (like trying to push the frontiers of doing as much good as possible, in ways that are possibly less legible to a large audience).
I am hopeful these two types of communities can co-exist. I personally suspect that finding ways for these two groups of people to cooperate and feel like they are on the same team could be quite good for helping us achieve our common goal of helping others better (and I think posts like this one and its response do wonders for all sorts of different people to remind us we are, in fact, all in it together, and that we can find little pockets for everyone who cares deeply to help us all help others more).
There are also limited positions in organisations as well as limited capacity of senior people to train up junior people but, again, I’m optimistic that 1) this won’t be so permanent and 2) we can work out how to better make sure the people who care deeply about effective altruism who have careers outside effective altruism organisations also feel like valued members of the community.
I think it’s important to define intelligence. Do we mean logic-based ability or a broader definition (emotional intelligence, spatial, etc.)?
EAs are probably high in one category but low in others.
Soon I’ll need a computer to keep track of the socially awkward interactions I’ve had with EAs who seem to be mostly aligned with a certain technical domain!
Others I talk to seem to have similar experiences.
Awkward is pretty mild as far as ways to be emotionally stupid go. If that’s all you’re running into, then EAs probably have higher than average emotional intelligence, but perhaps not as high in relative terms as their more classically defined intelligence.
I think you make a mistake in generalizing about intelligence: you assume a universal definition when, in reality, there is no single definition of what we mean when we talk about it.
If we assume (and allow me to) that you mean IQ, I want to quickly comment on the controversial association of IQ tests with white supremacy, with the perpetuation of racism in the United States and in what is commonly known as “the Global South”, and with the perpetuation of an ableist system that has extended to eugenics.
Understanding intelligence as something you are born with, rather than as a social construction built on attempts to categorize and standardize a biosocial interaction (attempts that began, as I say, in racist and eugenicist ways), is somewhat problematic.
Honestly, and this is my personal opinion, I don’t think EA people are smart per se. I also believe (or rather, I affirm) that going to a top university like Oxford or Harvard correlates not with being excellent, but with having had the opportunity to become excellent. And what I call opportunity also applies to those people who have not gone to university, of course.
Anyway, in EA we have a problem with how we identify ourselves as a group, one that could be addressed by investing effort in how our dynamics work, the ways in which we exclude other people (I’m not just referring to Olivia), and how that plays out within the community, at the level of biases and at the level of the effects all this has on the work we do.
I didn’t down/up-vote this comment but I feel the down-votes without explanation and critical engagement are a bit harsh and unfair, to be honest. So I’m going to try and give some feedback (though a bit rapidly, and maybe too rapidly to be helpful...)
It feels like just a statement of fact to say that IQ tests have a sordid history, and concepts of intelligence have been weaponised against marginalised groups historically (including women, might I add to your list ;) ). That is fair to say.
But reading this post, it feels less interested in engaging with the OP’s post let alone with Linch’s response, and more like there is something you wanted to say about intelligence and racism and have looked for a place to say that.
I don’t feel like relating the racist history of IQ tests helps the OP think about their role in EA; it doesn’t really engage with what they were saying: that they feel they are average and don’t mind that, but rather just want to be empowered to do good.
I don’t feel it meaningfully engages with Linch’s central point: that the community has lots of people with attributes X in it, and is set up for people with attributes X, but maybe there are some ways the community is not optimised for other people.
I think your post is not very balanced on intelligence.
General intelligence is, as far as I understand, a well-established domain in psychology / individual-differences research.
Though this doesn’t capture how many people with outlying abilities in e.g. maths and sciences will—as they put it themselves—not be as strong on other intelligences, such as social intelligence. And in fairness to many EAs who are like this, they put their hands up about their intelligence shortcomings in these domains!
Of course there’s a bio(psycho)social interaction between biological inheritance and environment when it comes to intelligence. The OP’s and Linch’s points still stand with that in mind.
The correlation between top university attendance and opportunity: notably, the strongest predictor of whether you go to Harvard is whether your parents went to Harvard; but disentangling that from a) ability and b) getting coached / moulded to show your ability in the ways you need to for Harvard admissions interviews is pretty hard. Maybe a good way of thinking of it is something like: for every person who gets into elite university X...:
there are hundreds of more talented people not given the opportunity or moulding to succeed at this, who otherwise would trounce them, but
there are tens of thousands more who, no matter how much opportunity or moulding they were given, would not succeed.
“Anyway, in EA we have a problem with how we identify ourselves as a group, one that could be addressed by investing effort in how our dynamics work, the ways in which we exclude other people (I’m not just referring to Olivia), and how that plays out within the community, at the level of biases and at the level of the effects all this has on the work we do.”
If I’m understanding you correctly, you’re saying “we have some group dynamics problems; we involve some types of people less, and listen to some voices less”. Is that correct?
I agree—I think almost everyone would identify different weird dynamics within EA they don’t love, and ways they think the community could be more inclusive; or some might find the lack of inclusiveness unpalatable but be willing to bite that bullet on trade-offs. Some good work has been done recently on starting up EA in non-Anglophone, non-Western countries, including putting forward the benefits of more local interventions; but a lot more could be done.
A new post on voices we should be listening to more, and EA assumptions which prevent this from happening would be welcome!
Thank you for your comment. At first I did not understand the downvotes, or why I wasn’t getting any kind of criticism.
I agree with what you say about my comment: it did not contribute anything to Olivia’s post. I realized this within hours of writing it, but I did not want to delete or edit it. I prefer that the mistakes I make remain visible so that I can track my own evolution over the near-to-medium future.
“But reading this post, it feels less interested in engaging with the OP’s post let alone with Linch’s response, and more like there is something you wanted to say about intelligence and racism and have looked for a place to say that.”
Actually, my intention was never to bring up the issue of racism or eugenics as such, but rather to address how intelligence is conceptualized within the EA community and defined as a means of measuring oneself against the group and against others. Thinking about it, I believe this would be a good topic to write about on this forum.
I also take your point about writing on the subject of EA dynamics, giving voice to other people, and criticizing both of the sides you mention.
Nothing to add—just wanted to explicitly say I appreciate a lot that you took the time to write the comment I was too lazy to.
I do think intelligence is less clearly defined than it could be, and I’ve complained in the past that the definition people often use is optimized more for prediction than independent validity.
However, I think the different definitions are sufficiently correlated that it’s reasonable for us to sometimes speak of it as one thing. Consider an analogy to “humor.” Humor means different things to different people, and there’s not a widely agreed-upon culture-free definition, but still it seems “I’m not funny enough to be a professional standup comedian” is a reasonable thing to say.
And my guess is that the different definitions of intelligence are more tightly correlated than different definitions (or different perspectives on) humor.
I also disagree with the implication (which, rereading, you did not say outright, so perhaps I misread you) that intelligence (and merit-based systems in general) is racist. If anything, I find the idea that merit-based measurements are racist or white supremacist to be itself kind of a racist idea, not to mention condescending to nonwhites.
I agree that intelligence has environmental components. I’m not sure why this is relevant here however.
Hi Linch! Thanks for your comment.
When I brought up the subject of intelligence and its definitions, it was in response to what Olivia said about not feeling or seeming intelligent enough for EA, and to how your fourth point can be understood. What I mean is that even if she (speaking in ultra-simplified and quite relative terms) is less smart than the average EA, it does not mean that she will always be less smart.
Leaving the door open to the learning and growth of abilities that may be underdeveloped could be a valid option for Olivia, but I do not see that option in your comment. I do not see you advancing that idea of personal and intellectual growth.
That is, she may not know about something yet and has the right to learn about it, at her own pace. Perhaps in the future she will discover she is an expert in some of this, but how can we know if we do not give her that option?
“I also disagree with the implication (which, rereading, you did not say outright, so perhaps I misread you) that intelligence (and merit-based systems in general) is racist.”
Here you have really misunderstood what I said, as I mentioned before:
“Actually, my intention was never to bring up the issue of racism or eugenics as such, but rather to address how intelligence is conceptualized within the EA community and defined as a means of measuring oneself against the group and against others. Thinking about it, I believe this would be a good topic to write about on this forum.”
Lastly, on merit-based systems, I think our opinions are further apart; if you ever want to talk about it in more depth, this forum has private messages for that.
I strongly agree with:
“I think you make a mistake in generalizing about intelligence: you assume a universal definition when, in reality, there is no single definition of what we mean when we talk about it.”
I think the rest of your comment detracts from this initial statement, because it claims a lot and extraordinarily strong claims need extraordinarily strong evidence. It was also a very politicized comment.
While naturally sometimes things just are political, when things toe a political party line it can sound more like rhetoric and less like rational argument, and that can ring alarm bells in people’s heads. I think for political comments especially, more evidence is needed per claim, because people are prone to motivated reasoning when things are tribal; I know I am certainly less rational when it comes to my political beliefs than my beliefs about chemistry, for example. (I think this is probably true also of things that toe the “EA party line”, but as this is the EA Forum, it makes sense that things that are more commonly thought in the EA community get justified less here than they would on a forum about any other topic. Still, I know that I have a bias towards believing things that are commonly believed in the EA community, and I really should require more evidence per claim that agrees with the EA community to correct this bias in myself, a thing that maybe I should reflect on more in the future.)
I think that your comment could have been improved by:
1) making it several separate comments, so people could upvote and downvote the different components separately (I am such a hypocrite, as my comments are often long and contain many different points, but this is something I should also work on), and
2) if you feel strongly that the more political parts of your comment were important to your core point, and you strongly suspect that there are parts that are true that could be fleshed out and properly justified, picking one narrow claim you made and fleshing it out a lot more, with more caveats on the bits you’re more or less confident in / that seem more or less backed by science (I personally don’t feel like those bits were important to your overall point, but that’s maybe because I don’t fully understand the point you were trying to make).
I also wanted to say sorry you got downvoted so much! That always sucks, especially when it’s unclear what the reason is.
It can be hard to tell whether people disagree with your core claim or whether people felt you didn’t justify stuff enough.
I didn’t upvote or downvote, but I both strongly agreed with your first sentence and felt a bit uncomfortable about how political your comment was, for the reasons stated above, and that might be the same reason other people downvoted.
I hope my comment is more helpful and that it wasn’t overly critical (my comments are also far from perfect)!
I thought it was worth saying that at least one reader didn’t completely disagree with everything here even if your original comment was very downvoted.
What we colloquially call “intelligence” does seem multi-dimensional; it would be very surprising to me if many people reading your comment disagreed with that (they might just think that there is some kind of intelligence that IQ tests measure that is not racist or ableist to think is valuable in some contexts for some types of tasks, even if there are other types of intelligence that are harder to measure that also might be very valuable).
FWIW, I am both mixed race and also have plenty of diagnoses that make me technically clinically insane :P (bipolar and ADHD), so if one counter-example is enough, I feel like I can be that counter-example.
I’d like to think the type of intelligence that I have is valuable too—no idea if it is easily measurable in an IQ test (I don’t think IQ tests are very informative for individuals, so I’ve not taken one as an adult).
Seeing my type of intelligence as valuable does not mean that other types of skills/intelligence can’t be valued too and I, personally, don’t think it makes much sense to see it as ableist or racist to value my skills/competencies/type of intelligence. We should still also value other skills/types of intelligence/competencies too.
I do think that professions that, on average, women tend to do more of and men tend to do less of, for whatever reason, are valued less (e.g. paediatricians versus surgeons). I would guess that this is a type of sexism. Is this the kind of thing you were trying to point to?
I concede the part where I assumed you meant IQ, but I made that assumption having previously read other EA members with clearly essentialist and biologistic ideas on the subject of intelligence, ideas that are also quite far from being rational. Continuing with that, in my third paragraph I commented on the problem of naturalizing something (intelligence) when the evidence and consensus we have say it is not so.
Acknowledging upfront the politicization behind the arguments that followed, where I speak from a perspective beyond the rationalist or philosophical, would probably have been the right move. Next time, I might start with something about this.
I therefore understand what you say about the politicization of my last paragraphs; next time I think I could focus more on possible evidence for this, something I did not think about at the outset for a short and brief comment like this one.
I’m going to be boring/annoying here and say some things that I think are fairly likely to be correct but may be undersaid in the other comments:
EAs on average are noticeably smarter than most of the general population
Intelligence is an important component for doing good in the world.
The EA community is also set up in a way that amplifies this, relative to much of how the rest of the world operates.
Most people on average are reasonably well-calibrated about how smart they are.(To be clear exceptions certainly exist)EDIT: This is false, see Max Daniel’s comment.If you’re less smart than average for EAs (or less driven, or less altruistic, or less hardworking, or have less of a social safety net), than on average I’d expect you to be less good at having a positive impact than others.
But this is in relative terms, in absolute terms I think it’s certainly possible to have a large impact still.
Our community is not (currently) set up well to accommodate the contributions of many people who don’t check certain boxes, so I expect there to be more of an uphill battle for many such people.
I don’t think this should dissuade you from the project of (effectively) doing good, but I understand and emphasize if this makes you frustrated.
(I think you probably agree with most of what I say below and didn’t intend to claim otherwise, reading your claim just made me notice and write out the following.)
Hmm, I would guess that people on average (with some notable pretty extreme outliers in both directions, e.g. in imposter syndrome on one hand and the grandiose variety of narcissistic personality disorder on the other hand, not to mention more drastic things like psychosis) are pretty calibrated about how their cognitive abilities compare to their peers but tend to be really bad at assessing how they compare to the general population because most high-income countries are quite stratified by intelligence.
(E.g., if you have or are pursuing a college degree, ask yourself what fraction of people that you know well do not and will never have a college degree. Of course, having a college degree is not the same as being intelligent, and in fact as pointed out in other comments if you’re reading this Forum you probably know, or have read content by, at least a couple of people who arguably are extremely intelligent but don’t have a degree. But the correlation is sufficiently strong that the answer to that question tells you something about stratification by intelligence.)
That is, a lot of people simply don’t know that many people with wildly different levels of general mental ability. Interactions between them happen, but tend to be in narrow and regimented contexts such as one person handing another person cash and receiving a purchased item in return, and at most include things like small talk that are significantly less diagnostic of cognitive abilities than more cognitively demanding tasks such as writing an essay on a complex question or solving maths puzzles.
For people with significantly above-average cognitive abilities, this means they will often lack a rich sense of how, say, the bottom third of the population in terms of general mental ability performs on cognitively demanding tasks, and consequently they will tend to significantly underestimate their general intelligence relative to the general population because they inadvertently substitute the question “how smart am I compared to the general population?” – which would need to involve system-2 reasoning and consideration of not immediately available information such as the average IQ of their peer group based on e.g. occupation or educational attainment – with the easier question “how smart am I compared to my peers?” on which I expect system 1 to do reasonably well (while, as always, of course being somewhat biased in one direction or the other).
As an example, the OP says “I’m just average” but also mentions they have a college degree – which according to this website is true of 37.9% of Americans of age 25 or older. This is some, albeit relatively weak, evidence against the “average” claim depending on what the latter means (e.g. if it just means “between the first and third quartile of the general population” then evidence against this is extremely weak, while it’s somewhat stronger evidence against being very close to the population median).
This effect gets even more dramatic when the question is not just about “shallow” indicators like one’s percentile relative to the general population but about predicting performance differences in a richer way, e.g. literally predicting the essays that two different people with different ability levels would write on the same question. This is especially concerning because in most situations these richer predictions are actually all that matters. (Compare with height: it is much more useful and relevant to know, e.g., how much different levels of height will affect your health or your dating prospects or your ability to work in certain occupations or do well at certain sports, than just your height percentile relative to some population.)
I also think the point that people are really bad at comparing them to the general population because society is so stratified in various ways applies to many other traits, not just to specific cognitive abilities or general intelligence. Like, I think that question is in some ways closer to the question “at what percentile of trait X are you in the population of all people that have ever lived”, where it’s more obvious that one’s immediate intuitions are a poor guide to the answer.
(Again, all of this is about gradual effects and averages. There will of course be lots of exceptions, some of them systematic, e.g. depending on their location of work teachers will see a much broader sample and/or one selected by quite different filters than their peer group.
I also don’t mean to make any normative judgment about the societal stratification at the root of this phenomenon. If anything I think that a clear-eyed appreciation of how little many people understand of the lived experience of most others they share a polity with would be important to spread if you think that kind of stratification is problematic in various ways.)
I think you’re entirely right here. I basically take back what I said in that line.
I think the thing I originally wanted to convey there is something like “people systematically overestimate effects like Dunning-Kruger and imposter syndrome,” but I basically agree that most of the intuition I have is in pretty strongly range-restricted settings. I do basically think people are pretty poorly calibrated about where they are compared to the world.
(I also think it’s notably more likely that Olivia is above average than below average.)
Relatedly, I think social group stratification might explain some of the other comments to this post that I found surprising/tone-deaf. (e.g. the jump from “did a degree in sociology” to “you can be a sociologist in EA” felt surprising to me, as someone from a non-elite American college who casually tracks which jobs my non-STEM peers end up in).
Yes, that’s my guess as well.
This feels like it misses an important point. On the margin, maybe less intelligent people will have on average less of an individual impact. But given that there are far more people of average intelligence than people on the right tail of the IQ curve, if EA could tune its pitches more to people of average intelligence, it could reach a far greater audience and thereby have a larger summed impact. Right?
I think there’s also a couple other assumptions in here that aren’t obviously true. For one, it assumes a very individualistic model of impact; but it seems possible that the most impactful social movements come out of large-scale collective action, which necessarily requires involvement from broader swaths of the population. Also, I think the driving ideas in EA are not that complicated, and could be written in equally-rigorous ways that don’t require being very smart to parse.
This comment upset me because I felt that Olivia’s post was important and vulnerable, and, if I were Olivia, I would feel pushed away by this comment. But I’m rereading your comment and thinking now that you had better intentions than what I felt? Idk, I’m keeping this in here because the initial gut reaction feels valuable to name.
Thanks I appreciate this feedback.
Anyway, on a good day, I try to aim my internet comments on this Forum to be true, necessary, and kind. I don’t always succeed, but I try my best.
I think realizing that different people have different capacities for impact is importantly true. I also think it’s important and true to note that the EA community is less well set up to accommodate many people than other communities. I think what I said is also more kind to say, in the long run, compared to casual reassurances that makes it harder for people to understand what’s going on. I think most of the other comments do not come from an accurate model of what’s most kind to Olivia (and onlookers) in the long run.
Is my comment necessary? I don’t know. In one sense it clearly isn’t (people can clearly go about their lives without reading what I said). But in another sense, I feel better about an EA community that is more honest to potential members with best guesses about what we are and what we try to do.
In terms of “pushed away”, I will be sad if Olivia (and others) read my comment and felt dissuaded about the project of doing good. I will be much less sad about some people reading my comment and it being one component in them correctly* deciding that this community is not for them. The EA community is not a good community for everyone, and that’s okay.
(Perhaps you think, as some of the other commentators seem to, that the EA community can do a ton more to be broadly accommodating,. This is certainly something that’s tractable to work on, e.g. we can emphasizing our role models to be more like people in Strangers Drowning rather than top researchers and entrepreneurs. But I’m not working on this, and chances are, neither are you).
*There is certainly a danger of being overly prone to saying “harsh truths”, such that people are incorrectly pushed away relative to a balanced portrayal. But I still stand behind what I said, especially in the context of trying to balance out the other comments that were in this post before I commented, notably before Lukas_Gloor’s comment.
FWIW I strongly agree with this.
Will we permanently have low capacity?
I think it is hard to grow fast and stay nuanced but I personally am optimistic about ending up as a large community in the long-run (not next year, but maybe next decade) and I think we can sow seeds that help with that (eg. by maybe making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere).
Good question! I’m pretty uncertain about the ideal growth rate and eventual size of “the EA community”, in my mind this among the more important unresolved strategic questions (though I suspect it’ll only become significantly action-relevant in a few years).
In any case, by expressing my agreement with Linch, I didn’t mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime “making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere” is (in some cases) the right goal.
Thanks 😊.
Yeah, I’ve noticed that this is a big conversation right now.
My personal take
EA ideas are nuanced and ideas do/should move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people.
However, the core bit of effective altruism, something like “help others as much as we can and change our minds when we’re given a good reason to”, does seem like an idea that has room for a much wider ecosystem than we have.
I’m personally hopeful we’ll get better at striking a balance.
I think it might be possible to both have a small group that is highly connected and dedicated (who maybe can move quickly) whilst also having more much adjacent people and groups that feel part of our wider team.
Multiple groups co-existing means we can broadly be more inclusive, with communities that accommodate a very wide range of caring and curious people, where everyone who cares about the effective altruism project can feel they belong and can add value.
At the same time, we can maybe still get the advantages of a smaller group, because smaller groups still exist too.
More elaboration (because I overthink everything 🤣)
Organisations like GWWC do wonders for creating a version of effective altruism that is more accessible that is distinct from the vibe of, say, the academic field of “global priorities research”.
I think it is probably worth it on the margin to invest a little more effort into the people that are sympathetic to the core effective altruism idea, but maybe might, for whatever reason, not find a full sense of meaning and belonging within the smaller group of people who are more intense and more weird.
I also think it might be helpful to put a tonne of thought into what community builders are supposed to be optimizing for. Exactly what that thing is, I’m not sure, but I feel like it hasn’t quite been nailed just yet and lots of people are trying to move us closer to this from different sides.
Some people seem to be pushing for things like less jargon and more inclusivity. Others are pointing out that there is a trade-off here because we do want some people to be thinking outside the Overton Window. The community also seems quite capacity constrained and high-fidelity communication takes so much time and effort.
If we’re trying to talk to 20 people for one hour, we’re not spending 20 hours talking to just one incredibly curious person who has plenty of reasonable objections and, therefore, need someone, or several people, to explore the various nuances with them (like people did with me, possibly mistakenly 😛, when I first became interested in effective altruism and I’m so incredibly grateful they did). If we’re spending 20 hours having in-depth conversations with one person, that means we’re not having in-depth conversations with someone else. These trade-offs sadly exist whether or not we are consciously aware of them.
I think there are some things we can do that are big wins at low cost though, like just being nice to anyone who is curious about this “effective altruism” thing (even if we don’t spend 20 hours with everyone, we can usually spend 5 minutes just saying hello and making people who care feel welcome and that them showing up is valued, because imo, it should definitely be valued!).
Personally, I hope there will be more groups that are about effective altruism ideas where more people can feel like they truly belong. These wider groups would maybe be a little bit distinct from the smaller group(s) of people who are willing to be really weird and move really fast and give up everything for the effective altruism project. However, maybe everyone, despite having their own little sub-communities, still sees each other as wider allies without needing to be under one single banner.
Basically, I feel like the core thrust of effective altruism (helping others more effectively using reason and evidence to form views) could fit a lot more people. I feel like it’s good to have more tightly knit groups who have a more specific purpose (like trying to push the frontiers of doing as much good as possible in possibly less legible ways to a large audience).
I am hopeful these two types of communities can co-exist. I personally suspect that finding ways for these two groups of people to cooperate and feel like they are on the same team could be quite good for helping us achieve our common goal of helping others better (and I think posts like this one and its response do wonders for all sorts of different people to remind us we are, in fact, all in it together, and that we can find little pockets for everyone who cares deeply to help us all help others more).
There are also limited positions in organisations as well as limited capacity of senior people to train up junior people but, again, I’m optimistic that 1) this won’t be so permanent and 2) we can work out how to better make sure the people who care deeply about effective altruism who have careers outside effective altruism organisations also feel like valued members of the community.
I think its important to define intelligence. Do we mean logic-based ability or a more broader definition (emotional intelligence, spatial, etc).
EAs are probably high in one category but low in others.
Soon I’ll need a computer to keep track of the socially awkward interactions I’ve had with EAs who seem to be mostly aligned with a certain technical domain!
Others I talk seem to have a similar experiences.
awkward is pretty mild as far as ways to be emotionally stupid go. If that’s all you’re running into then EAs probably have higher than average emotional intelligence, but perhaps not as high in relative terms as their more classically defined intelligence
I think you make a mistake to make a generalization of intelligence, you assume a universalized definition when, in reality, there is no single definition of what we mean when we talk about it.
If we assume (and allow me to) that you mean IQ, I wanted to quickly comment on the controversy (and correlation) of IQ tests with white supremacy, and the perpetuation of racism in the United States and in what is commonly known as “the Global South” and the perpetuation of an ableist system that has reached eugenics.
Understanding intelligence as something you are born with and not as a social construction based on trying to categorize and standardize (at the beginning, as I say, with racist and eugenic ways) a biosocial interaction is somewhat problematic.
Honestly, and this is my personal opinion, I don’t think EA people are smart per se. I also believe (or rather, I affirm) that there is a correlation between going to a top university like Oxford or Harvard not with being excellent, but with having had the opportunity to be that. And what I call opportunity also applies to those people who have not gone to university, of course.
Anyway, in EA we have a problem when it comes to identifying ourselves as a group that could be easily resolved by investing efforts in how our dynamics work, and the ways in which we exclude other people (I’m not just referring to Olivia) and how that affects within the community, at the level of biases and at the level of the effects that all this has on the work we do.
I didn’t down/up-vote this comment but I feel the down-votes without explanation and critical engagement are a bit harsh and unfair, to be honest. So I’m going to try and give some feedback (though a bit rapidly, and maybe too rapidy to be helpful...)
It feels like just an statement of fact to say that IQ tests have a sordid history; and concepts of intelligence have been weaponised against marginalised groups historically (including women, might I add to your list ;) ). That is fair to say.
But reading this post, it feels less interested in engaging with the OP’s post let alone with Linch’s response, and more like there is something you wanted to say about intelligence and racism and have looked for a place to say that.
I don’t feel like relating the racist history of IQ tests helps the OP think about their role in EA; it doesn’t really engage with what they were saying that they feel they are average and don’t mind that, but rather just want to be empowered to do good.
I don’t feel it meaningfully engages with Linch’s central point; that the community has lots of people with attributes X in it, and is set up for people with attributes X, but maybe there are some ways the community is not optimised for other people
I think your post is not very balanced on intelligence.
general intelligence is as far as I understand a well established psychological / individual differences domain
Though this does how many people with outlying abilities in e.g. maths and sciences will—as they put it themselves—not be as strong on other intelligences, such as social. And in fairness to many EAs who are like this, they put their hands up on their intelligence shortcominds in these domains!
Of course there’s a bio(psycho)social interaction between biological inheritance and environment when it comes to intelligence. The OP’s and Linch’s points still stand with that in mind.
The correlation between top university attendance and opportunity. Notably, the strongest predictor of whether you go to Harvard is whether your parents went to Harvard; but disentangling that from a) ability and b) getting coached / moulded to show your ability in the ways you need to for Harvard admissions interviews is pretty hard. Maybe a good way of thinking of it is something like for every person who get into elite university X...:
there are 100s of more talented people not given the opportunity or moulding to succeed at this, who otherwise would trounce them, but
there are 10000s more who, no matter how much opportunity or moulding they were given, would not succeed
If I’m understanding you correctly, you’re saying “we have some group dynamics problems; we involve some types of people less, and listen to some voices less”. Is that correct?
I agree—I think almost everyone would identify different weird dynamics within EA they don’t love, and ways they think the community could be more inclusive; or some might find lack of inclusiveness unpalateable but be willing to bite that bullet on trade-offs. Some good work has been done recently on starting up EA in non-Anglophone, non-Western countries, including putting forward the benefits of more local interventions; but a lot more could be done.
A new post on voices we should be listening to more, and EA assumptions which prevent this from happening would be welcome!
Thank you for your comment, at the beginning I did not understand about the downvotes and why I wasn’t getting any kind of criticism
I agree with what you say with my comment, I would not contribute anything to Olivia’s post, I realized this within hours of writing it and I did not want to delete or edit it. I prefer that the mistakes I may do remain present so that I can study a possible evolution for the near-medium future.
Actually, my intention was not focused at any time to bring up the issue of racism or eugenics, but more in terms of how within the EA community intelligence is conceptualized and defined as a means to measure oneself between the group and the others. I believe this, thinking about it, is a good idea to write about it in this forum.
I also point out about writing on the subject of EA dynamics, giving voices to other people and criticizing both sides that you comment
Nothing to add—just wanted to explicitly say I appreciate a lot that you took the time to write the comment I was too lazy to.
I do think intelligence is less clearly defined than it could be, and I’ve complained in the past that the definition people often use is optimized more for prediction than independent validity.
However, I think the different definitions are sufficiently correlated that it’s reasonable to us to sometimes speak of it as one thing. Consider an analogy to “humor.” Humor means different things to different people, and there’s not a widely agreed upon culture-free definition, but still it seems “I’m not funny enough to be a professional standup comedian” is a reasonable thing to say.
And my guess is that the different definitions of intelligence are more tightly correlated than different definitions (or different perspectives on) humor.
I also disagree with the implication (which, rereading, you did not say outright, so perhaps I misread you) that intelligence (and merit-based systems in general) is racist. If anything, I find the idea that merit-based measurements are racist or white supremacist to be itself kind of a racist idea, not to mention condescending to nonwhites.
I agree that intelligence has environmental components. I’m not sure why this is relevant here however.
Hi Linch! Thanks for your comment
When I brought up the subject of intelligence and its definitions, it was in response to what Olivia says about not feeling or seeming intelligent enough for EA, and to how your fourth point can be understood. What I mean is that even if she (speaking in ultra-simplified and quite relative terms) is less smart than the average EA, it does not mean she will always be less smart.
Leaving the door open to learning, and to growth in abilities that may currently be underdeveloped, could be a valid option for Olivia, but I do not see that option in your comment. I do not see you putting forward that idea of personal and intellectual growth.
That is, she may not know about something yet, and she has the right to learn about it at her own pace. Perhaps in the future she will discover she is an expert in some of this, but how can we know if we do not give her that option?
Here you have really misunderstood what I said, as I mentioned before:
Lastly, on the merit-based system, I think our opinions are further apart; if you ever want to discuss it in more depth, this forum has private messages for that.
I strongly agree with:
”I think you make a mistake to make a generalization of intelligence, you assume a universalized definition when, in reality, there is no single definition of what we mean when we talk about it.”
I think the rest of your comment detracts from this initial statement, because it makes a lot of claims, and extraordinarily strong claims need extraordinarily strong evidence. It was also a very politicized comment.
While naturally some things just are political, when a comment toes a political party line it can sound more like rhetoric than rational argument, and that can ring alarm bells in people's heads. I think political comments especially need more evidence per claim, because people are prone to motivated reasoning when things are tribal; I know I am certainly less rational about my political beliefs than about my beliefs about chemistry, for example. (This is probably also true of things that toe the "EA party line." Since this is the EA Forum, it makes sense that views common in the EA community get justified less here than they would on a forum about any other topic, but I know I have a bias towards believing things that are commonly believed in the EA community, and I really should require more evidence per claim that agrees with the community to correct for this bias; maybe something to reflect on more in the future.)
I think that your comment could have been improved by
1) making it several separate comments, so people could upvote and downvote the different components separately (I'm a hypocrite here, since my comments are often long and contain many different points, but it's something I should work on too), and
2) if you feel strongly that the more political parts of your comment were important to your core point, and you strongly suspect there are parts that are true and could be properly justified, picking one narrow claim and fleshing it out a lot more, with caveats on the bits you're more or less confident in, or that seem more or less backed by evidence (I personally don't feel those bits were important to your overall point, but that may be because I don't fully understand the point you were trying to make).
I also wanted to say sorry you got downvoted so much! That always sucks, especially when it’s unclear what the reason is.
It can be hard to tell whether people disagree with your core claim or whether people felt you didn’t justify stuff enough.
I didn't upvote or downvote, but I both strongly agreed with your first sentence and felt a bit uncomfortable about how political your comment was, for the reasons stated above; that might also be why other people downvoted.
I hope my comment is more helpful and that it wasn’t overly critical (my comments are also far from perfect)!
I thought it was worth saying that at least one reader didn’t completely disagree with everything here even if your original comment was very downvoted.
What we colloquially call "intelligence" does seem multi-dimensional; it would be very surprising to me if many people reading your comment disagreed with that. (They might just think that there is some kind of intelligence that IQ tests measure, and that it is not racist or ableist to consider it valuable in some contexts for some types of tasks, even if there are other, harder-to-measure types of intelligence that might also be very valuable.)
FWIW, I am both mixed race and also have plenty of diagnoses that make me technically clinically insane :P (bipolar and ADHD), so if one counter-example is enough, I feel like I can be that counter-example.
I'd like to think the type of intelligence that I have is valuable too. No idea if it is easily measurable in an IQ test (I don't think IQ tests are very informative for individuals, so I've not taken one as an adult).
Seeing my type of intelligence as valuable does not mean that other types of skills/intelligence can't be valued too, and I personally don't think it makes much sense to see it as ableist or racist to value my skills/competencies/type of intelligence. We should still value other skills/types of intelligence/competencies as well.
I do think that professions that, on average, women tend to do more of and men tend to do less of, for whatever reason, are valued less (e.g. paediatricians versus surgeons). I would guess that this is a type of sexism. Is this the kind of thing you were trying to point to?
Hi, thank you for your comment.
I can agree with the part about me assuming things related to IQ, but I make those assumptions having previously read other EA members with clearly essentialist and biologistic ideas about intelligence, ideas that are also quite far from being rational. Continuing with that, in the third paragraph I comment on the problem of naturalizing something (intelligence) when the evidence and consensus do not support treating it that way.
Acknowledging the politicization behind the arguments that follow, where I speak from a perspective beyond the rationalist or philosophical one, is probably the most honest position I could take. Next time, I might open by saying something about this.
I therefore understand what you say about the politicization of my last paragraphs. Next time I will try to focus more on possible evidence for my claims, something I did not think about when writing what was meant to be a short, brief comment.