Nicholas—yes; strongly upvoted.
The debate over AI risk went mainstream in the last few weeks. It’s suddenly within the Overton window (the set of things considered reputable to discuss and believe in public).
This means we all need to raise our game in public communication, especially around AI alignment and risk.
As you imply, this isn’t just a matter of building better communication skills (simplicity, clarity, logic, links, vividness, reading Pinker’s book, etc.), crucial though those are. It’s also a matter of embracing broader public communication values: respect for the audience’s time, goodwill, and ability to contribute to the conversation, and respect for their varying levels of knowledge, background, fatigue, distraction, mood, and neurodiversity.
This is a crucial time: the EA, Rationalist, and X-risk communities will either make a positive and decisive impact on the public discourse, or confuse, alienate, and aggravate people.
One good heuristic when communicating on social media is to cultivate more self-awareness about which technical terms, arguments, and writing styles are really intended to signal status and virtue to one’s ingroup (e.g. other EAs), versus which are actually intended to communicate effectively with ordinary folks outside the ingroup. Almost always, there’s a tradeoff: what’s impressive to our EA peers typically won’t be persuasive outside EA, and what’s persuasive to normies on Twitter won’t be impressive to our EA peers.
We have to be willing to bite the bullet on this latter point. A moment when AI risk has suddenly come into public consciousness, and we have a time-limited opportunity to shape the public discourse in helpful directions, is not the time to build one’s status and prestige within the movement by showing off how many obscure AI alignment terms one happens to know, or how clever a philosophical critique one can offer of some LessWrong post.