X-risk: yes. The idea of fast AI development: yes. Knowing the phrase “takeoff speed”? No. To be sure, this also depends somewhat on the type of role and its seniority. “Moral patienthood” strikes me as one of those terms where someone interested in one of our jobs will likely get the idea, but they might not know the term “moral patienthood”. Note here that I wrote “language” and you wrote “concepts”, and these are not the same. One of the distinctions I care about is whether people understand, or can easily come to understand, the ideas/concepts. I care less about which specific words they use.
Digressing slightly: using specific language is a marker of group belonging, and people seem to take pleasure in using in-group language because it signals that belonging, even when standard terms for the same concepts already exist. Oxytocin creates internal group belonging and, at the same time, exclusion of outsiders. Language can do some of the same.
So yes, it’s important to me that people understand certain core concepts. But again, don’t overindex on me. Maybe I should have clarified this better in my first comment: I’ve personally thought that EA/AI safety groups have done a bit too much in-group hiring, so I set out to figure out how to hire people more widely while retaining the same mission focus.
Thanks for expanding! I appreciate the distinction between “language” and “concepts”, as well as your thoughts on language as in-group signaling and on excessive in-group hiring.