This comment feels important, like something I’ve been considering trying to spin into a full post. Finding a frame has been hard, because it feels like I’m trying to translate what’s (unfortunately) a distinctly non-EA cultural norm into reasoning that EAs will take more seriously.
One thought I do want to share, though, is that I don’t think it’s quite right to see this as something that needs to be weighed against good epistemics. I think prizing good epistemics should mean being able to reason clearly about, and adjust our reactions to, the tone/emotional tenor of people who (very understandably!) are speaking from a place of trauma and deep hurt.
The best frame I have so far for a post is reminding people of Julia Galef’s straw-Vulcan argument and spelling out what it implies for conversations on (understandably) incredibly emotionally heavy topics, and in tough times more generally. Roughly rehashing the argument because I can’t find a good link for it: Spock frequently assumes that humans will be perfectly rational creatures under all circumstances, and when this leads him astray he essentially shrugs and responds, “it’s not my fault that I did not predict their actions correctly; they were being irrational!”. Galef’s point, of course, is that this is horrible rationality: the failure to reason about how emotions might affect people, and to adjust accordingly, means your epistemics are severely impoverished.
Setting aside the straw-Vulcan rationality argument, there also feels like there should be an argument along the lines of how (to me, incredibly!) obvious it should be that tone like this demands sympathy, and a willingness to take on the burden of being accommodating, from anyone serious about thinking of themselves as invested in altruism as a value. I’m still figuring out how to express this crisply (and, to be clear, without bitterness) so that it will resonate.
If you have thoughts on what the best frame would be here, I’d love to hear them or discuss more.
Edited to take out something unkind. Trying to practice what I preach here.
I think this isn’t central to your point, but I wanted to push the “straw Vulcan” point a bit further. It’s not just that it’s rational to try to understand other people’s emotional behaviour; it’s that even your own emotional responses are frequently rational, and that being attuned and responsive to your emotions is an important epistemic tool. When people hurt you it is rational to be angry, or sad; it is not rational to be ruled by these emotions, but ignoring them entirely is just as bad. Your emotions are as much a part of your sensory/observational experience of the world as your vision or hearing, and if you don’t acknowledge their role in your understanding, I think you will make worse predictions about what will happen.