Johannes—these are valid concerns, I think.
One issue is: what’s the optimal degree of moral anger/outrage to express about a given issue that one’s morally passionate about? It probably depends a lot on the audience. Among Rationalist circles, any degree of anger may be seen as epistemically disqualifying, socially embarrassing, ethically dubious, etc. But among normal folks, if one’s arguing for an ethical position that they expect would be associated with a moderate amount of moral outrage (if one really believed what one was saying), then expressing that moderate level of outrage might be most persuasive. For example, a lot of political activism includes a level of expressed moral outrage that would look really silly and irrational to Rationalists, but that looks highly appropriate, persuasive, and legitimate to many onlookers. (For example, in protest marches, people aren’t typically acting as cool-headed as they would be at a Bay Area Rationalist meet-up—and it would look very strange if they were.)
Your second issue is even trickier: is it OK to induce strong moral outrage about an issue in people who don’t really understand the issue very deeply at a rational, evidence-based level? Well, that’s arguably about 98% of politics and activism and persuasion and public culture. If EA as a movement is going to position itself in an ethical leadership role on certain issues (such as AI risk), then we have to be willing to be leaders. This includes making decisions based on reasons and evidence and values and long-term thinking that most followers can’t understand, don’t understand, and may never understand.
I don’t expect that the majority of humanity will ever be able to understand AI well enough (including deep learning, orthogonality, inner alignment, etc.) to make well-informed decisions about AI X risk. Yet the majority of humanity will be affected by AI, and by any X risks it imposes. So, either EA people make our own best judgments about AI risk based on our assessments, and then try to persuade people of our conclusions (even if they don’t understand our reasoning), or… what? We try to do cognitive enhancement of humanity until they can understand the issues as well as we do? We hope everybody gets a master’s degree in machine learning? I don’t think we have the time.
I think we need to get comfortable with being ethical leaders on some of these issues—and that includes using methods of influence, persuasion, and outreach that might look very different from the kinds of persuasion that we use with each other.