Johannes—thanks for sharing a useful perspective. I think in many cases, you’re right that a kind of cool, resigned, mindful courage in the face of likely doom can be mentally healthy for individuals working on X risk issues. Like the chill of a samurai warrior who tries to face every battle as if his body were already dead—the principle of hagakure. If our goal is to maximize the amount of X risk reduction research we can do as individuals, it can make sense to find some equanimity while living under the shadow of personal death and species-level extinction.
However, in many contexts, I think that a righteous fury at people who witlessly impose X risks on the rest of us can also be psychologically healthy. As a parent, I’m motivated to protect my kids, by almost any means necessary, against X risks. As a citizen, I feel moral outrage against politicians who ignore X risks. As a researcher, righteous fury against X risks makes me feel more motivated to band together with other, equally infuriated, like-minded researchers, rather than suffering doomy hopelessness alone.
Also, insofar as moral stigma against dangerous technologies (e.g. AI, bioweapons, nukes) might be a powerful way to fight those X risks, righteous anti-doom fury and moral outrage might be more effective than chill resignation. Moral outrage tends to spark moral stigma, which might be exactly what we need to slow the development of dangerous technologies.
Of course, moral outrage tends to erode epistemic integrity, motivates confirmation bias, reinforces tribalism, can provoke violence (e.g. Butlerian jihads), etc. So there are downsides, but in some contexts, the ethical leadership power and social coordination benefits of moral outrage might outweigh those.
Hagakure is, I think, a useful concept and technique to know. Thank you for telling me about it. I think it is different from what I was describing in this article, but it seems like a technique you could layer on top. I haven’t practiced it much yet, though I suspect there is a good chance it will work.
I can definitely see that being outraged can be useful on both the individual and the societal level. However, I think the major challenge is to steer the outrage correctly. As you say, epistemic integrity can easily suffer. I encourage everybody who draws motivation from outrage to still think carefully through the reasons why they are outraged. These should be reasons such that, if you told them to a neutral, curious observer, the reasons alone would be enough to convince them of the thing (without the communication being optimized to persuade).
Johannes—I agree that it can be important to try to maintain epistemic integrity even if one feels deep moral outrage about something.
However, there are many circumstances in which people won’t take empirically & logically valid arguments about important topics seriously if they’re not expressed with an authentic degree of outrage. This is less often the case within EA culture. But it’s frequently the case in public discourse.
It seems that Eliezer Yudkowsky, for example, has often (for over 20 years) tried to express his concerns about AI X-risk fairly dispassionately. But he’s often encountered people saying, ‘If you really took your own arguments seriously, you’d express a lot more moral outrage, and more willingness to use traditional human channels for expressing and implementing outrage, such as calls for moral stigmatization of AI, outlawing AI, ostracizing AI practitioners, etc.’ (But then, of course, when he does actually argue that nation-states should be willing to enforce a hypothetical global moratorium on AI using the standard military intervention methods (e.g. drone strikes) that are routinely used to enforce international agreements in every other domain, people act all outraged, as if he’s preaching Butlerian Jihad. Sometimes you just can’t win....)
Anyway, if normal folks see a disconnect between (1) valid arguments that a certain thing X is really really bad and we should reduce it, and (2) a conspicuous lack of passionate moral outrage about X on the part of the arguer, then they will often infer that the arguer doesn’t really believe their own argument, i.e. they’re treating it as a purely speculative thought experiment, or they’re arguing in bad faith, or they’re trolling us, etc.
This is a very difficult issue to resolve, but I expect it to be increasingly important as EAs discuss practical ways to slow down AI capability development relative to AI alignment efforts.
I’m not sure whether what you say is correct. Maybe. I think there is one difficulty that needs to be taken into account, which is that I think it is hard to elicit the appropriate reaction. When I see people arguing angrily, I am normally biased against what they say. So I need to make an effort to take them more seriously than I otherwise would. It is therefore unclear to me what percentage of people moral outrage would even affect in the way we want it to affect them.
There’s also another issue. When you are emotionally outraged, it may induce moral outrage in other people. Would it be a good thing to create lots of people who don’t really understand the underlying arguments but are really outraged and vocal about the position that AGI is an existential risk? I expect most of these people will not be very good at arguing correctly for AGI being an existential risk. They will make the position look bad and will make other people less likely to take it seriously in the future. Or at least this is one of several hypothetical risks I see.
Johannes—these are valid concerns, I think.

One issue is: what’s the optimal degree of moral anger/outrage to express about a given issue that one’s morally passionate about? It probably depends a lot on the audience. Among Rationalist circles, any degree of anger may be seen as epistemically disqualifying, socially embarrassing, ethically dubious, etc. But among normal folks, if one’s arguing for an ethical position that they expect would be associated with a moderate amount of moral outrage (if one really believed what one was saying), then expressing that moderate level of outrage might be most persuasive. For example, a lot of political activism includes a level of expressed moral outrage that would look really silly and irrational to Rationalists, but that looks highly appropriate, persuasive, and legitimate to many onlookers. (For example, in protest marches, people aren’t typically acting as cool-headed as they would be at a Bay Area Rationalist meet-up—and it would look very strange if they were.)
Your second issue is even trickier: is it OK to induce strong moral outrage about an issue in people who don’t really understand the issue very deeply at a rational, evidence-based level? Well, that’s arguably about 98% of politics and activism and persuasion and public culture. If EA as a movement is going to position itself in an ethical leadership role on certain issues (such as AI risk), then we have to be willing to be leaders. This includes making decisions based on reasons and evidence and values and long-term thinking that most followers can’t understand, and don’t understand, and may never understand.
I don’t expect that the majority of humanity will ever be able to understand AI well enough (including deep learning, orthogonality, inner alignment, etc.) to make well-informed decisions about AI X risk. Yet the majority of humanity will be affected by AI, and by any X risks it imposes. So, either EA people make our own best judgments about AI risk based on our assessments, and then try to persuade people of our conclusions (even if they don’t understand our reasoning), or... what? We try to do cognitive enhancement of humanity until they can understand the issues as well as we do? We hope everybody gets a master’s degree in machine learning? I don’t think we have the time.
I think we need to get comfortable with being ethical leaders on some of these issues—and that includes using methods of influence, persuasion, and outreach that might look very different from the kinds of persuasion that we use with each other.