This is a good point. The book does indeed focus heavily on research questions.
We do see value in many corporations discussing AI ethics. In particular, there seems to be a rise of ethical discussion within the big tech companies, which we hope to encourage. In fact, in Chapter 7, we urge AI companies like Google and Facebook not only to take part in the AI ethics discussion and research, but also to actively motivate, organize, and coordinate it, typically by sharing their AI ethics dilemmas and perhaps parts of their AI codes. In a sense, they have already started to do so.
Another point is that, given the urgency we perceive in AI safety, it seems useful to reach out to academic talent in many different ways. Targeted discussions do improve the quality of the conversation, but we fear they may not “scale” sufficiently. We feel that some academics might be quite receptive to reflecting on the public discussion. But we may be underestimating the difficulty of making this discussion productive...
(I have given a large number of public talks and found it quite easy to raise the book’s concerns with all sorts of audiences, including start-ups and tech companies, but I do greatly fear what could happen with the media...)
I should add that the book goes to great lengths to encourage calm thinking and fruitful discussion of the topic. We even added a section in Chapter 1 where we apologize for the title and clarify the purpose of the book. We also ask readers to be pedagogical and benevolent themselves when criticizing or defending the book’s theses. But clearly, this content will only have an impact on those who actually read the book.
Anyway, thanks for your comment. We’re definitely pondering it!
Just registering that I’m not convinced this justifies the title.
Well, you were more than right to do so! You (and others) have convinced us. We changed the title of the book :)