I’m not sure if what you say is correct. Maybe. I think there is one difficulty that needs to be taken into account: it is hard to elicit the appropriate reaction. When I see people arguing angrily, I am normally biased against accepting what they say as correct, so I need to make an effort to take them more seriously than I otherwise would. It is therefore unclear to me what percentage of people moral outrage would even affect in the way that we want it to affect them.
There’s also another issue. When you express moral outrage, it may induce moral outrage in other people. Would it be a good thing to create lots of people who don’t really understand the underlying arguments but are loudly outraged about the position that AGI is an existential risk? I expect most of these people would not be very good at arguing correctly for that position. They would make the position look bad and make other people less likely to take it seriously in the future. Or at least this is one of several risks I see.
Johannes—these are valid concerns, I think.
One issue is: what’s the optimal degree of moral anger/outrage to express about a given issue that one’s morally passionate about? It probably depends a lot on the audience. Among Rationalist circles, any degree of anger may be seen as epistemically disqualifying, socially embarrassing, ethically dubious, etc. But among normal folks, if one’s arguing for an ethical position that they expect would be associated with a moderate amount of moral outrage (if one really believed what one was saying), then expressing that moderate level of outrage might be most persuasive. For example, a lot of political activism includes a level of expressed moral outrage that would look really silly and irrational to Rationalists, but that looks highly appropriate, persuasive, and legitimate to many onlookers. (For example, in protest marches, people aren’t typically acting as cool-headed as they would be at a Bay Area Rationalist meet-up—and it would look very strange if they were.)
Your second issue is even trickier: is it OK to induce strong moral outrage about an issue in people who don’t really understand the issue very deeply at a rational, evidence-based level? Well, that’s arguably about 98% of politics and activism and persuasion and public culture. If EA as a movement is going to position itself in an ethical leadership role on certain issues (such as AI risk), then we have to be willing to be leaders. This includes making decisions based on reasons and evidence and values and long-term thinking that most followers can’t understand, don’t understand, and may never understand.
I don’t expect that the majority of humanity will ever be able to understand AI well enough (including deep learning, orthogonality, inner alignment, etc.) to make well-informed decisions about AI X risk. Yet the majority of humanity will be affected by AI, and by any X risks it imposes. So, either EA people make our own best judgments about AI risk based on our assessments, and then try to persuade people of our conclusions (even if they don’t understand our reasoning), or… what? We try to do cognitive enhancement of humanity until they can understand the issues as well as we do? We hope everybody gets a master’s degree in machine learning? I don’t think we have the time.
I think we need to get comfortable with being ethical leaders on some of these issues—and that includes using methods of influence, persuasion, and outreach that might look very different from the kinds of persuasion that we use with each other.