I agree with you that filtering for alignment is important. The mainstream non-profit space talks a lot about filtering for “mission fit”, which I think is a similar concept. Obviously it would be hard to run an animal advocacy org with someone chowing down on chicken sandwiches every day for lunch in the organization cafeteria.
But my hot take for the main place I see this go wrong in EA: some EAs I have talked to, including some quite senior ones, overuse “this person may be longtermist-adjacent and seem to be well-meaning but they just don’t give me enough vibes that they’re x-risk motivated and no I did not actually ask them about it or press them about this” → “this is not a person I will work with” as a chain of reasoning, to the point of excluding people with nuanced views on longtermism (or just confused views, held by people who could learn and improve). This makes the longtermist community more insular and worse. I think PELTIV and the like have a similar flavor of making snap judgements from afar without actually checking them against reality (though there are other clear problems with it too).
My other take about where this goes wrong is less hot and basically amounts to “EA still ignores outside expertise too much because the experts don’t give off enough EA vibes”. If I recall correctly, nearly all opinions on wild animal welfare in EA had to be thrown out after discussion with relevant experts.
“this person may be longtermist-adjacent and seem to be well-meaning but they just don’t give me enough vibes that they’re x-risk motivated and no I did not actually ask them about it or press them about this”
Fortunately this can be fixed by publishing pamphlets with the correct sequences of words helpfully provided, and creating public knowledge that if you’re serious about longtermism you just need to whisper the correct sequence of words to the right person at the right time.
Jokes aside, there’s an actual threat of devolving into applause light factories (I’ll omit the rant about how the entire community building enterprise is on thin ice). Indeed, someone at Rethink Priorities once told me they weren’t convinced their hiring process was doing a good job of separating “knows what they’re talking about, can reason about the problems we’re working on, cares about what we care about” from “ideological passwords, recitation of shibboleths”, and that it was one of the things they really wanted to get right but weren’t confident they were getting right. It’s not exactly easy.
Yeah I certainly don’t think our hiring process is perfect at this either. These kinds of concerns weigh on me a lot and we’re constantly thinking about how we can get better.
I haven’t seen that but if that’s happening then I agree that’s bad and we should discourage it!