Thanks, this is a good follow-up. I'm glad my comment contained useful feedback for you.
I think your attempt to help Anthony went awry when he asked you why his tone was a bigger issue than whether he had been misrepresented, and in your reply you did not even seem to consider that he could be right. Perhaps he is right? Perhaps not? But it's important to at least genuinely consider that he could be.
Thank you for recognizing that my concern was not addressed. I should mention that I am also not operating from an assumption that anyone is 'intrinsically against me' - it's an unusually specific reaction that I've received on this forum in particular. So, I'm glad that you have spoken up in favor of due consideration. My stomach knots thank you :)
I don't feel good about this situation, but I think your judgement is really different from most readings of what happened:
It's clear to me that there's someone who isn't communicating or forming beliefs in a workable way. Chris Leong's comments seem objectively correct (even if not likely to be useful).
(While committing this very sin with this comment) It's clearly better to walk away and leave them alone than to risk stirring up another round of issues.
My comment very well may not be useful. I think there's value in experimenting with different ways of engaging with people. I think it is possible to have these kinds of conversations, but I don't think I've quite figured out how to do that yet.
I think the person involved is either going through a specific negative personal incident, or revealing latent personality traits that suggest the situation is much less promising and below a reasonable bar for skilled intervention in a conversation.
With a willingness to be wrong and to ignore norms, I think I could elaborate or make informative comments (maybe relevant to trust, scaling, and dilution, which seem to be major topics right now?). But it feels distasteful and inhumane to do this to one individual who is not an EA.
(I think EAs can and should endure much more, directly and publicly, and this seems like it would address would-be problems with trust and scaling.)
That’s useful feedback. I agree that it would have been better for me to engage with that more.
Glad to have been helpful :)