I don’t think that impression would be unfounded. In Julia Wise’s post from last August, she mentioned these trade-offs (among others):
Encourage the sharing of research and other work, even if the people producing it have done bad stuff personally
Don’t let people use EA to gain social status that they’ll use to do more bad stuff
Take the talent bottleneck seriously; don’t hamper hiring / projects too much
Take culture seriously; don’t create a culture where people can predictably get away with bad stuff if they’re also producing impact
This means, on the one hand, that the team is well aware of the potential consequences of making decisions based on naive impact calculations. On the other hand, it means that when the team decides how to handle a complaint, the impact of the accused person's work is certainly taken into account.
More generally, it seems that the team does think of its end goal as making the most positive impact (which fits what other CEA higher-ups have said about the goals of the org as a whole), and creating a safe community is indeed just a means to that end.
This all makes me somewhat distrustful of the Community Health team.