The Parable of the Talents, especially the part starting at:

“But I think the situation can also be somewhat rosier than that.

Ozy once told me that the law of comparative advantage was one of the most inspirational things they had ever read. This was sufficiently strange that I demanded an explanation.

Ozy said that it proves everyone can contribute. Even if you are worse than everyone else at everything, you can still participate in global trade and other people will pay you money. It may not be very much money, but it will be some, and it will be a measure of how your actions are making other people better off and they are grateful for your existence.”
might prove reassuring. Yes, EA has lots of very smart people, but those people exist in an ecosystem that almost everyone can contribute to. People do, and should, give kudos to those who do the object-level work required to keep the attention of the geniuses on the parts of the problems which need them.
As some examples of helpful things available to you:
Being an extra pair of hands at events
Asking someone who you think is aligned with your values and might have too much on their plate what you can help them with (if you actually have the bandwidth to follow through)
Making yourself available to onboard newcomers to the ideas in 1-on-1 conversations
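To make Ozy’s comparative-advantage point in the quote above concrete, here is a minimal numeric sketch. The people, goods, and numbers are all made up for illustration: even someone slower at everything increases total output, and so has something to trade, by specialising in the task where their relative disadvantage is smallest.

```python
# Toy example of comparative advantage: Bob is worse at both goods,
# yet specialisation plus trade leaves the pair strictly better off.
HOURS = {  # hypothetical hours each person needs per unit of each good
    "alice": {"bread": 1, "cloth": 1},
    "bob":   {"bread": 2, "cloth": 4},  # absolute disadvantage in both
}
DAY = 12  # hours available to each person per day

def output(person, hours_on_bread):
    """Units of (bread, cloth) a person makes, given their time split."""
    h = HOURS[person]
    return (hours_on_bread / h["bread"], (DAY - hours_on_bread) / h["cloth"])

def combined(alice_bread_hours, bob_bread_hours):
    a = output("alice", alice_bread_hours)
    b = output("bob", bob_bread_hours)
    return (a[0] + b[0], a[1] + b[1])

# No trade, both split their day evenly between the two goods:
print("no trade:   ", combined(6, 6))    # (9.0, 7.5)

# Bob specialises in bread, where his relative disadvantage is smallest
# (he gives up 0.5 cloth per bread vs Alice's 1.0), and Alice shifts
# toward cloth -- same total bread, strictly more total cloth:
print("specialised:", combined(3, 12))   # (9.0, 9.0)
```

Because the specialised allocation produces more in total, there is a range of prices at which trading makes both people better off, which is why Bob gets paid even though he is worse at everything.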
I also want to chime in here and say that coming into the EA community was a bit of a shock for me too: I was one of the more analytical people in most of my friendship groups, yet it was pretty quickly clear to me that my comparative advantage in this community was actually EQ, communications, and management. I’m glad to work with some incredibly smart analytical people who are kind enough to (a) help me understand things that confuse me when I’m frank about what I don’t understand; and (b) remind me what else I bring to the table.
That Luke needs to be reminded of what he brings to the table is, I think, evidence that we’re missing out on many extremely talented people who aren’t at the 99.9th percentile on the one particular skillset we overselect for.
As a counter-example, I am below average at many skills that people in my wider peer group have which, I believe, would be incredibly helpful to the effective altruism movement. However, I am good at a very narrow type of thing that is easy to signal in conversation, which makes people in the EA community often think way more highly of me than, I believe, is rational.
I have found easy social acceptance in this community because I speak fluent mathematics. I have higher-IQ friends who, in high-trust conversations, are extremely epistemically humble and have a lot to contribute, but whom I can’t easily integrate into the effective altruism community.
I believe that part of what makes it hard to introduce people who aren’t exceptionally analytical to effective altruism is that there seems to be a stronger prior here that intelligence and competence are one-dimensional (or that all types of competence and intelligence are correlated) than there is elsewhere. It does seem true that some people are more intelligent/skilled than others on many different dimensions we might care about, and this is maybe a taboo thing to say in many contexts. However, competence and intelligence are multi-dimensional, and different types of intelligence/skills seem to me unlikely to be perfectly correlated with each other. I’d guess some are probably anti-correlated: we each have a limited number of neurons, and if those neurons are highly specialized at solving one type of problem, then, at the skill frontier, that scarce brain capacity likely trades off against other specialized skills.
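For what it’s worth, the “correlated but not perfectly” picture is easy to simulate. The sketch below uses made-up parameters (two skill dimensions, correlation 0.5, pools of 1,000 people) to estimate how often the single best person on one skill is also the single best on the other; it comes out to only a few percent, which is the sense in which selecting hard on one dimension misses the top people on others.

```python
# Minimal simulation: two positively correlated skills, many candidate
# pools; how often does one person top BOTH skill rankings?
import numpy as np

rng = np.random.default_rng(0)
rho, n_people, n_trials = 0.5, 1000, 2000
cov = [[1.0, rho], [rho, 1.0]]  # unit variances, correlation rho

both_best = 0
for _ in range(n_trials):
    skills = rng.multivariate_normal([0.0, 0.0], cov, size=n_people)
    if skills[:, 0].argmax() == skills[:, 1].argmax():
        both_best += 1

print(f"P(best at skill A is also best at skill B) ~ {both_best / n_trials:.3f}")
```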
To find someone good at marketing, we possibly had to find the one marketing guy who happened to be way above average at pretty much everything, including analytic intelligence (and who, being only 99th-percentile analytical instead of 99.9th-percentile, needs reminding of his value in a community that very heavily rewards analytical thinking).
While analytic reasoning can be handy, it is not the only skill worth having, and I don’t think you need that much of that particular skill to understand the core EA ideas well enough to be a very valuable contributor to this community. Being exceptionally good at reasoning transparency and analytic philosophy is not perfectly correlated with many other types of skills or intelligence that the effective altruism community desperately needs to maximize its impact. While some types of skills and intelligence have synergies and often come together, I suspect that other skills have different synergies.
If this model is accurate, then some skills are likely to be anti-correlated with the capacity to show large degrees of reasoning transparency and impress in EA-style conversations.
If those are skills we desperately need, then saying this movement isn’t for anyone who doesn’t find the forum very easy to read, or doesn’t find analytical conversations effortless, might very well cause us to be much lower impact than we otherwise could be.
Comparative advantage is a thing and, as far as I’ve observed, skillsets and personalities do seem to cluster together.
If we want our movement to maximize its impact, then we can’t just select for the people who are exceptionally analytical at the cost of losing out on people who are exceptionally good at, e.g., marketing or policy (I suspect it could be harder to find top people to work in AI governance without there being room for a greater variety of people who care deeply about helping others).
In short, if my model is correct, being a bit different from other people in the effective altruism community is evidence that you might have a comparative advantage (and maybe even an absolute advantage) within our community, and that you are paving the way for people who are less weird, in the ways EA people tend to be weird, to find belonging here.
I strongly believe that if you care deeply about others in an impartial way, then carving out space for you is very much in the best interest of this community (and that if the EA community is a place you want to be, finding your place in it is going to help others feel like they, too, have a place). It is also fine to just do what’s good for you: if the EA community isn’t healthy for you for whatever reason, it’s fine to take what you like and leave the rest!
Another relevant Slate Star Codex post is Against Individual IQ Worries.
I love this post. It is so hard to communicate that the second moment of a distribution (how much any person or thing tends to differ from the average[1]) is often important enough that what is true on average doesn’t apply very well to any individual (and that platitudes which are technically false can therefore often be directionally correct in EA/LessWrong circles).
[1] This definition was edited in because I only thought of an okay definition ages later.
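As a concrete (made-up) illustration of why the second moment matters: the two hypothetical groups below have exactly the same average, but “the average member is about 50” is a good description of almost everyone in one group and almost no one in the other.

```python
# Same mean, very different spread: the average describes group A well
# and group B badly.
import statistics

group_a = [49, 50, 50, 50, 51]   # low variance
group_b = [10, 30, 50, 70, 90]   # same mean, high variance

for name, xs in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: mean={statistics.mean(xs)}, "
          f"stdev={statistics.stdev(xs):.1f}")
# group A: mean=50, stdev=0.7
# group B: mean=50, stdev=31.6
```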
Some of my personal thoughts on jargon and why I chose, pretty insensitively given the context of this post, to use some anyway
I used the “second moment of a distribution” jargon here initially (without the definition that I later edited in) because I feel like sometimes people talk past each other, and I wanted to say what I meant in a way that would be precise even for people who might not be sure exactly what everyone else meant. Plain English sometimes sacrifices precision for the sake of being inclusive (inclusivity that I personally think is incredibly valuable, not just in the context of this post). And often precision is totally unnecessary to get across the key idea.
However, when you say something in language that is a little less precise, it naturally leaves more room for different interpretations, some of which readers might agree with and some of which they might not. The reason jargon tends to exist is that it is really precise. I was trying to find a really precise way of saying the vibe of what many other people were saying, so that everyone felt a tiny bit more on the same page (no idea if I succeeded, though, or whether it was actually worth it, or even needed, or whether this is all just in my head).
For what it’s worth, I think the term “variance” is much more accessible than “second moment”.
Variance is a relatively common word. I think in many cases we can be more inclusive without losing precision (another example is “how much I’m sure of this” vs “epistemic status”).
lol, yeah, totally agree (strong upvoted).
In hindsight, I think I might literally have been subconsciously indicating in-groupness (“indicating in-groupness” means trying to show I fit in 🤮 -- it feels so much worse in plain English for a reason: jargon is more precise, but it’s often less obvious what is meant, so it’s easier to hide behind) because my dumb brain likes for people to think I’m smarter than I am.
In my defense, it’s so easy, in the moment, to use the first way of expressing what I mean that comes to mind.
I am sure that I am more likely to think of technical ways of expressing myself because technical language makes a person sound smart, and sounding smart gets socially rewarded.
I so strongly reflectively disagree with this impulse but the tribal instinct to fit in really is so strong (in every human being) and really hard to notice in the moment.
I think it takes much more brain power to find the precise and accessible way to say something, so, ironically, more technical language often indicates the opposite of the impression it gives.
This whole thing reminds me of the Richard Feynman take that if you can’t explain something in language everyone can understand, that’s probably because you don’t understand it well enough. I think that we, as a community, would be better off if we managed to get good at rewarding more precise and accessible language and better at punishing unnecessary uses of jargon (like here!!!).[1]
I kind of love the irony of having clearly done something that is a pretty perfect example of exactly what I, on reflection, believe we need to do a whole lot less of as a community 🤣
I think it’s also good to be nice on the forum, and I think Lorenzo nailed this balance perfectly. Their comment was friendly and kind, with a suggested replacement term, but it still made me feel like using unnecessary jargon was something I shouldn’t have done (which will likely make my subconscious less likely to instinctively reach for unnecessary jargon in the future 👌).
It’s just my general feeling that, on the forum recently, a few different groups of people are sometimes talking past each other while all saying valuable, true things (but still, as always, people are generally good at finding common ground, which is something I love about the EA community).
Really, I just want everyone reading to understand where everyone else is coming from. This makes me want to be more precise when other people are saying the same thing in plain English, and to optimise for accessibility when everyone else is saying something in technical jargon that more people could get value from understanding.
Ideally I’d be good enough at writing to be precise and accessible at the same time (but both precision and making comments easier to understand for a broader group of readers are so time-consuming that I often try to do one or the other, and sometimes I’m terrible and make a quick comment that is definitely neither 🤣).