I appreciate you seeking feedback here. A book targeted at the general public could easily shape the discussion in many unintended ways.
Admittedly, the “AI kills” part is designed to be clickbait. We do hope to reach a wide audience, which we regard as desirable to accelerate the spread of AI ethics in all sorts of corporations. Naturally, we expect the title to be misused and abused by many people. But we are confident that a 3-minute discussion with a calm person is sufficient to convince them of the relevance of the title (see Chapter 3).
I wonder why you think the sensationalist title is worth it.
A less sensationalist title would probably mean
less unproductive debate
a lower risk of unintentionally sending the discussion around AI down a confused and politicized path
fewer people reading it
but the people who do read it will probably be more informed and more interested in having a sober discussion
Can you explain why you think that AI ethics being discussed in all sorts of corporations is very useful? My impression is that AI Safety mostly needs more academic talent directed toward research, not to become another hot topic in corporations that don't even do AI research.
This is a good point. Indeed, the book focuses largely on research questions.
We do see value in many corporations discussing AI ethics. In particular, there seems to be a rise of ethical discussions within the big tech companies, which we hope to encourage. In fact, in Chapter 7, we urge AI companies like Google and Facebook not only to take part in the AI ethics discussion and research, but to actively motivate, organize and coordinate it, typically by sharing their AI ethics dilemmas and perhaps parts of their AI codes. In a sense, they have already started to do so.
Another point is that, given the urgency we perceive around AI Safety, it may be useful to reach out to academic talent in many different ways. Targeted discussions do raise the quality of the debate, but we fear that they may not “scale” sufficiently. We feel that some academics might be quite receptive to reflecting on the public discussion. But we may be underestimating the difficulty of making this discussion productive...
(I have given a large number of public talks, and found it quite easy to raise the concerns of the book for all sorts of audiences, including start-ups and tech companies, but I do greatly fear what could happen with the media...)
I should add that the book goes to great lengths to encourage calm thinking and fruitful discussion of the topic. We even added a section in Chapter 1 where we apologize for the title and clarify the purpose of the book. We also ask readers to themselves be pedagogical and benevolent when criticizing or defending the theses of the book. But clearly, such content will only have an impact on those who actually read the book.
Anyway, thanks for your comment. We’re definitely pondering it!
Just registering that I’m not convinced this justifies the title.
Well, you were more than right to do so! You (and others) have convinced us. We changed the title of the book :)