tcelferact—when posting about X risk issues, I agree that we should be careful about what kinds of emotions we accidentally or intentionally evoke in readers.
When facing major collective threats, humans, as hyper-social primates, have a fairly limited palette of emotions that can get evoked, and that motivate collective action to address those threats.
Probably the least useful emotions are despair, resignation, depression, generalized anxiety, and ‘black-pilled’ pessimism. These tend to be associated with curling up in a fetal position (metaphorically), and waiting passively for disaster, without doing much to prevent it. It’s a behavioral analog of ‘catatonia’ or ‘tonic immobility’ or ‘playing dead’. (Which can be useful in convincing a predator to lose interest, but wouldn’t be much use against OpenAI continuing to be reckless about AGI development.)
Possibly more useful are the kinds of emotions that motivate us to proactively rally others to our cause, to face the threat together. These emotions typically include anger, moral outrage, moral disgust, fury, wrath, indignation, a sense of betrayal, and a steely determination to hold the line against enemies. Of course, intense anger and moral outrage have some major downsides: they reinforce tribalism (us/them polarization), can motivate violence (that's kinda one of their main purposes), and they can inhibit rational, objective analysis.
But I think on balance, EAs tend to err a bit too far in the direction of trying to maintain rational neutrality in the face of looming X risks, and trying too hard to avoid anger or outrage. The problem is, if we forbid ourselves from feeling anger/outrage (e.g. on the grounds that these are unseemly, aggressive, primitive, or stereotypically ‘conservative’ emotions), we’re not left with much beyond despair and depression.
In my view, if people in the AI industry are imposing outrageous X risks on all of us, then moral outrage is a perfectly appropriate response to them. We just have to learn how to integrate hot and strong emotions such as outrage with the objectivity, rationality, epistemic standards, and moral values of EAs.
I totally agree with Dr. Miller. When we talk about AI risks, it's really important to find some balance between staying rational and acknowledging our emotions. Indeed, feeling down or hopeless can make us passive, while being angry or morally outraged can push us to face challenges together. The trick is to use these emotions in a productive way while still sticking to our values and rational thinking.
I don’t object to folks vocalizing their outrage. I’d be skeptical of ‘outrage-only’ posts, but I think people expressing their outrage while describing what they are doing and what they wish the reader to do would be in line with what I’m requesting here.