I don’t feel good about this situation, but I think your judgment is really different from most reads of what happened:
It’s clear to me that there’s someone who isn’t communicating or forming beliefs in a way that would be workable. Chris Leong’s comments seem objectively correct (even if not likely to be useful).
(While committing this sin with this comment itself) It’s clearly better to walk away and leave them alone than risk stirring up another round of issues.
My comment very well may not be useful. I think there’s value in experimenting with different ways of engaging with people. I think it is possible to have these kind of conversations but I don’t think that I’ve quite managed to figure out how to do that yet.
I think the person involved is either having a specific negative personal incident, or revealing latent personality traits that suggest the situation is much less promising and below a reasonable bar for skilled intervention in a conversation.
With a willingness to be wrong and ignore norms, I think I could elaborate or make informative comments (maybe relevant to trust, scaling, and dilution, which seem to be major topics right now?). But it feels distasteful and inhumane to do this to one individual who is not an EA.
(I think EAs can and should endure much more, directly and publicly, and this seems like it would address would-be problems with trust and scaling.)