Sorry for any terseness; this may be out of scope, or better placed in the comments on their original post. Keep in mind I'm not someone whose opinion matters about this.
Plausibly, but who knows. Inclusivity failures are not an indictment. Sometimes knowing you disagree with an institution is a prediction that working together wouldn’t go super well.
As a baseline, recall that in normie capitalism, hiring discrimination on alignment happens all the time. You have to at least pretend to care. Small orgs hold you to a higher standard of this "pretending to care" than large orgs do (cf. Gwern's Fermi estimate of the proportion of Amazon employees who "actually" care about same-day delivery). Some would even say that to pull off working at a small org you have to actually care, and some would say that most EA orgs have more in common with startups than with enterprises. But ConcernedEAs do care. They care enough to write a big post. So it's looking good for them so far.
I probably converge with them on the idea that ideological purity and the accurate recitation of shibboleths make a very bad screening tool for any org. The more movement-wide cohesion we have, the greater a threat this is.
So like, individual orgs should use their judgment, with standards analogous to a startup's: avoid hiring someone who openly doesn't care about the customers or the product. That doesn't require a deep amount of conformity.
So, with the caveat that it depends a lot on the metric ton of variables that go into whether someone seems like a viable employee (beyond their domain/object-level contributions, potential, and expertise), and with deep and immense emphasis on all the reasons a hiring decision might not go through, I don't think they're disqualified from most projects. The point is that, due to the nitty-gritty, there may be some projects they are disqualified from, and that is good and efficient. Or rather, it would be good and efficient if they weren't anonymous.