Sure, but then you need to make a case for why you would prioritise this over anything else that you think has good consequences. I think the com health statement tries to make that argument (though it's not fully specified), whereas a statement like "we want to stop x because x is bad" doesn't really help me understand why they want to prioritise x.
Okay, I feel like we need to rewind a bit. The problem is that people who have experienced behaviour like harassment are getting the impression from that document that the Community Health team might ignore their complaint depending on how "effective" the bad actor in question is, based on some naive EV calculation.
Now I'm assuming this impression is mistaken, in which case literally all they need to do is update the document to make it clear they don't tolerate bad behaviour, whoever it comes from. This costs $0.
I don’t think that impression would be unfounded. In Julia Wise’s post from last August, she mentioned these trade-offs (among others):
- Encourage the sharing of research and other work, even if the people producing it have done bad stuff personally
- Don't let people use EA to gain social status that they'll use to do more bad stuff
- Take the talent bottleneck seriously; don't hamper hiring / projects too much
- Take culture seriously; don't create a culture where people can predictably get away with bad stuff if they're also producing impact
This means, on the one hand, that the team is well aware of the potential consequences of using naive impact calculations to decide on their actions. On the other hand, it means that when they decide on a policy for handling complaints, the work the accused person is doing is certainly taken into account.
More generally, it seems that the team does think of their end goal as making the most positive impact (which fits what other CEA higher-ups have said about the goals of the org as a whole), and creating a safe community is indeed just a means to that end.
This all makes me somewhat distrustful of the Community Health team.