If I could give this post 20 upvotes I would.
As someone relatively new to the EA community, I see this as the single biggest opportunity to make the community more impactful.
Communication within the EA community (and within the AI Safety community) is wonderful: clear, crisp, logical, calm, proportionate. If only the rest of the world could communicate like that, how many problems we’d solve.
But unfortunately, many people, maybe even most, react to ideas emotionally, their gut reaction outweighing or even preventing any calm, logical analysis.
And it feels like a lot of people see EAs as “cold and calculating” because of the way we communicate: with numbers and facts and rationale.
There is a whole science of communication (in which I’m far from an expert) that looks at how to make a message stick, how to use storytelling to tap into humans’ natural appetite for stories, how to use emotion-laden words and images instead of numbers, and so on.
For example: thousands of articles were written about the tragic and perilous way migrants would try to cross the Mediterranean to get to Europe. We all knew the facts, but few people acted. Then one photograph of a small boy who washed up dead on a beach almost single-handedly moved millions of people to realise that this was inhumane, that we couldn’t let it go on (though, in the end, it is still going on). The photo was horrible and tragic, and it showed just one of thousands of similar tragedies, yet it did more than all the numbers.
We could ask ourselves what kind of images might convey the dangers of AI with a similar emotional force. In 2001: A Space Odyssey, Stanley Kubrick achieved something like this: he captured the human experience of utter impotence against a very powerful AI. It was just one person, but we empathised with that person, just as we empathised with the boy and his family.
What you’re describing is how others have used this form of communication, very likely fine-tuned in focus groups, to make their message as impactful and as emotional as possible.
EAs need to learn to do this better. We need to separate the calm, logical discussion of what the best course of action is from the challenge of communicating in a way that actually brings it about. Some groups do this quite well, but we are still amateurs compared with the (often bad) actors pushing alternative viewpoints, who use sophisticated psychology and analysis to fine-tune their messaging.
(Full disclosure: this is part of what I’m studying for the project I’m doing for the BlueDot AI Safety course.)