tl;dr: I suspect that speaking fluent LessWrong jargon is anti-correlated with being exceptionally good at ops, so selecting for it seems counterproductive.
Double-cruxing in the comments is so satisfying to read. In this comment I’ve outlined another possible related component that this post might have been pointing to (the ideas below get easily conflated in my head unless I do the work to tease out the distinct pieces, so I thought that might plausibly be happening a bit here too).
I am curious how much actual values matter versus how much being good at EA/rationalist signalling matters.
E.g. I think people take what I say a lot more seriously when I use jargon than when I say the same core idea in plain English or with a more accessible framing (which is usually a lot more work: saying what I believe to be true in language I think a smart newcomer from a different academic background could understand is hard, but it often seems worth it to me).
I can imagine someone who gets vibed as a “normie” because of the language and framing they use (because they’ve spent more time in the real world than in the EA world, where different wording gets social reinforcement) being dismissed despite caring about the long-term future and believing AI is a really big deal. Would you even get many applicants to the orgs you’d be hiring at who didn’t buy into this at least a little?
The reason I find it plausible that use of language matters more than actual values is that I have humanities friends who are quicker than me at picking up new concepts. (Teaching maths to them is both extremely fun and a tiny bit depressing because they pick things up much quicker than I ever did; turns out group theory is easy, who knew? As an aside, group theory is my favourite thing to teach to smart people with very little maths background because there are quite a few hard-ish exercises that require very little prerequisite knowledge.)
They have great epistemics in conversations with me, where the level of trust is high enough that very different worldviews don’t make “the other person is trolling/talking in bad faith because they can’t see what obviously logically follows” the most plausible hypothesis whenever we end up talking past each other.
I couldn’t bring them to an EA event because when I simulate their experience, I see them being dismissed as having poor reasoning skills because they don’t speak fluent STEM or analytic philosophy.
I know that speaking rationalist is a good signal: speaking like a LessWronger tells you the person has probably read a lot of LessWrong and, therefore, that you are likely to have a lot of common ground with them. However, it limits the pool of candidates a lot if finding LessWrong fun to read is a prerequisite (which probably requires being both exceptionally analytical and exceptionally unbothered by confrontational communication styles, among other personal traits that might not correlate well with ops skills).
I suspect that for ops roles, being exceptionally analytical might not be fundamentally necessary to be amazing (and I don’t find it implausible that being exceptionally analytical is anti-correlated with being mind-blowingly incredible at ops).
I don’t normally think you should select for speaking fluent LessWrong jargon, and I have advocated for hiring senior ops staff who have read relatively little LessWrong.
Great (and also unsurprising, so now I’m trying to work out why I felt the need to write the initial comment).
I think I wrote the initial comment less because I expected anyone to reflectively disagree and more because I think we all make snap judgements that maybe take conscious effort to notice and question.
I don’t expect anyone to advocate for people because they speak more jargon (largely because I think very highly of people in this community). I do expect it to be harder to understand someone who comes from a different cultural bubble and, therefore, harder to work out whether they are aligned enough with your values. Jargon often gives precision that makes people more legible. Also, human beings are pretty instinctively tribal, and we naturally trust people who indicate in some way (e.g. in their language) that they are more like us. I think it’s also easy for these things to get conflated: it’s hard to tell where a gut feeling comes from, and once we have one, we are naturally far more likely to have supporting arguments pop into our heads than opposing ones.
Anyway, I feel there is something I’m pointing to even if I’ve failed to articulate it.
Obviously EA hiring is pretty good, because big things have already been accomplished and are still getting accomplished. I probably should have said initially that this concern feels quite marginal. My guess as an outsider is that hiring is, overall, done quite a bit better than at the median non-profit organisation.
I think the reason it’s tempting to criticize EA orgs is that we’re all more invested in them being as good as they possibly can be and so want to point out perceived flaws to improve them (though this instinct might often be counter-productive because it takes up scarce attention, so sorry about that!).
I have related thoughts on over-selecting for one single good-but-not-the-be-all-and-end-all trait (being exceptionally analytic) in the EA community in response to the ridiculously competent CEO of GWWC’s comment here: https://forum.effectivealtruism.org/posts/x9Rn5SfapcbbZaZy9/ea-for-dumb-people?commentId=noHWAsztWeJvijbGC
Part of why I think selecting for this trait is particularly bad if you want to find amazing ops people is that I have a hunch that executive dysfunction is very bad for ops roles. I also suspect that executive dysfunction makes it easier to trick people in conversations/interviews into thinking you’re really smart and good at analytic reasoning.
I actually think that executive dysfunction and interest in EA are so well correlated that we could use ASD and ADHD diagnostic tools to identify people who are predisposed to falling down the EA/LessWrong/longtermism rabbit hole. (I’m only half-joking: I legitimately think the difficulty of finding people who are interested in EA but have no executive-dysfunction traits might be the root cause of some talent bottlenecks.)
I am biased because I think I’m quite good at “tricking” people into thinking I’m smart in conversation, and being halo-effected in EA has been bad for me and could have been a lot worse (because at age 20 you just believe people when they tell you your doubts are imposter syndrome rather than self-awareness).
Don’t get me wrong, I love me (well, I actually have a very up and down relationship with myself but I generally have a lot more ups than downs and I have an incredible psychologist so hopefully I won’t need a caveat like this in future 🤣🤞).
I just think that competency is more multi-dimensional than many rationalists seem to alieve.