I’m not sure I agree with this. I agree that compassion is a good default, but I think that compassion needs to be extended to all the people who have been impacted by the FTX crisis, which will include many people in the ‘Dank EA Memes’ Facebook group. Humour can be a coping mechanism that makes some people feel better about bad situations:
“As predicted, individuals with a high sense of humor cognitively appraised less stress in the previous month than individuals with a low sense of humor and reported less current anxiety despite experiencing a similar number of everyday problems in the previous two months as those with a low sense of humor. These results support the view that humor positively affects the appraisal of stressful events and attenuates the negative affective response, and related to humor producing a cognitive affective shift and reduction in physiological arousal (Kuiper et al. 1993; Kuiper et al. 1995; Martin et al. 1993).”
Maybe there is a way to use humour that feels kinder, but I’ve personally yet to see anything since the FTX crisis started that could be described as “compassionate” but that also made me laugh as much as those memes did.
While I agree that humour is a great de-stressor, I have faith in our ability to find alternative ways to entertain ourselves that don’t involve kicking someone while they’re down.
but I think that compassion needs to be extended to all the people who have been impacted by the FTX crisis
I agree with this. But I think compassion needs to be extended even further, to all sentient beings who might be worse off because of this event (this assumes that EA suffering from this event will mean less good done in the world, and I recognize that some people seem to genuinely think the world would be better if EA disappeared). The implication is that we need to think about whether the mockery and passionate outrage are good for those sentient beings.
It might be that some EAs engaging in mockery or passionate outrage are doing so as a form of damage control. But from a longer-term perspective, I am not sure these mechanisms are net-positive, for the reasons below.
On a more general level, it seems to me that trusting and following our social norms systematically and reliably leaves out most sentient beings who deserve our compassion (future people and future nonhuman animals, nonhuman animals in general, and potential digital beings). And anger/disgust as mechanisms for “enforcing ethics” seem to me particularly dubious, if not harmful: they are often also directed at those who don’t show anger/disgust toward whatever most people think deserves it, thereby reinforcing norms that are already widely held rather than extending them. Also, people can observe which things attract anger/disgust and which don’t, and I believe some, if not many, will inevitably treat that as evidence of how bad, important, or urgent an issue is.
On a personal level, I have tried to move away from using anger or disgust to regulate my moral thinking and actions, or as mechanisms to change the world, and I seem to have had some success. I used to be extremely angry with people who know about the suffering of factory-farmed animals but still choose to keep fueling it, and moderately disgusted with farmed-animal advocates who somehow think the suffering of animals in nature is okay. But I no longer feel these emotions as strongly as I used to. And I have to admit, I don’t feel much anger or disgust this time, even though I think something very wrong has likely happened.
UPDATE: I saw Wixela’s comment above after finishing typing this. I agree with Wixela that EAs are sometimes better off feeling what we genuinely feel, especially given that EAs already have pretty widespread and strong norms about controlling emotions and letting rationality override our instincts/emotions. But I stand by the view that anger/disgust are pretty dubious as mechanisms for “enforcing ethics”.