it’s not important to me that people know a lot about in-group language, people, or events around AI safety
I can see that people and events are less important, but as far as concepts go, I presume it would be important for them to know at least some of the terms, such as x-risk/s-risk, moral patienthood, recursive self-improvement, takeoff speed, etc.
As far as I know, hardly any of these are widely known outside the AIS community. Or do you mean something else by in-group language?
X-risk: yes. The idea of fast AI development: yes. Knowing the phrase “takeoff speed”? No. Of course, this also depends a bit on the type of role and seniority. “Moral patienthood” strikes me as one of those terms where someone interested in one of our jobs will likely get the idea, but might not know the term itself. So let’s note here that I wrote “language” and you wrote “concepts”, and these are not the same. The distinction I care about is that people understand, or can easily come to understand, the ideas/concepts. I care less about which specific words they use.
Digressing slightly: using specific language is a marker of group belonging, and people seem to find pleasure in using in-group language because it signals that belonging, even when standard terms for the concepts exist. Oxytocin creates in-group bonding and, at the same time, exclusion of outsiders. Language can do some of the same.
So yes, it’s important to me that people understand certain core concepts. But again, don’t overindex on me. I maybe should have clarified the following better in my first comment: I’ve personally thought that EA/AI safety groups have done a bit too much in-group hiring, so I set out to figure out how to hire people more widely while retaining the same mission focus.
Thanks for expanding! I appreciate the distinction between “language” and “concepts” as well as your thoughts on using language for in-group signaling and too much in-group hiring.