In a good faith discussion, one should be primarily concerned with whether or not their message is true, not what effect it will have on their audience.
Agreed, although I might be much less optimistic about how often this applies. Lots of communication comes before good faith discussion—lots of messages reach busy people who have to quickly decide whether your ideas are even worth engaging with in good faith. And if your ideas are presented in ways that look silly, many potential allies won’t have the time or interest to consider your arguments. This seems especially relevant in this context because there’s an uphill battle to fight—lots of ML engineers and tech policy folks are already skeptical of these concerns.
(That doesn’t mean communication should be false—there’s much room to improve a true message’s effects just by improving how it’s framed. In this case, given that there are both similarities and differences between a field’s concerns and a sci-fi movie’s concerns, emphasizing the differences might make sense.)
(On top of the objections you mentioned, I think another reason why it’s risky to emphasize similarities to a movie is that people might think you’re worried about stuff because you saw it in a sci-fi movie.)
Yeah, you’re right actually, that paragraph is a little too idealistic.
As a practical measure, I think it cuts both ways. Some people will hear “yes, like Terminator” and roll their eyes. Some people will hear “no, not like Terminator”, get bored, and tune out. Embracing the comparison is helpful, in part, because it lets you quickly establish the stakes. The best path is probably somewhere in the middle, and dependent on the audience and context.
Overall I think it’s just about finding that balance.