I’ve been thinking about whether some kind of informal court or arbitration system could make the social pressure here less dependent on individuals enacting social enforcement on their own.
My model has been that there should be social enforcement for both poor epistemic practices and rude/unkind communication.
I have been an active commenter on both posts, with social pressure as an explicit goal (i.e. providing accountability and social pressure not to behave inappropriately towards your employees).
I’d be interested to hear meta-level criticisms of my approach (e.g. “social pressure is inherently bad”). Whilst I don’t want witch hunting that employs poor epistemic practices, I do think social pressure plays an important role in stabilising communities. Perhaps someone can change my mind on this? If you do, I’ll certainly comment a lot less.
To me it seems like everyone individually applying social pressure is hard to calibrate. Oli seems to be saying that he and Ben did not intend the level of social consequences NL has felt based on what they shared, but rather an update that NL shouldn’t be a trusted EA org. I think it’s hard to control the impression people will get when you provide a lot of evidence, even if it’s all relatively minor, and almost impossible to control snowballing dynamics in comment sections and on social media when people fear being judged for the wrong reaction. So it just might not be possible for a post like Ben’s to be received in a calibrated way.
This sounds right, but the counterfactual (no social accountability) seems worse to me, so I’m operating on the assumption that it’s a necessary evil.
I live in a high-trust country, which has very little of this social accountability: if someone does something potentially rude or unacceptable in public, they are given the benefit of the doubt. However, I expect this works because others are employed, full time, to hold people accountable, i.e. police officers, ticket inspectors, traffic wardens. I don’t think we have this in the wider Effective Altruism community right now.