I was thinking about ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:
A Trump voter or a liberal voter
A woman who chose to have an abortion or an anti-abortion activist
A transgender person or someone opposed to transgender rights
A person from another race, religion, or a country your country might be at odds with
Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.
The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and conversation starters set in engaging, fictional scenarios. Chatbots should ideally be created in collaboration with people who actually hold these views, ensuring authenticity. Or chatbots could even be based on real, identifiable people who could hold AMAs. Ultimately, users might even be matched with real people of differing beliefs for video calls or correspondence. If done well, such an app could perhaps even be used in schools, fostering empathy and reducing division from an early age.
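To make the persona mechanics concrete, here is a minimal sketch of how one such chatbot could be wired up with an off-the-shelf LLM API. This assumes the OpenAI Python SDK and the gpt-4o model; the persona text is an invented placeholder, and in the app it would be co-written with a real person who holds the views, as suggested above.

```python
# Minimal persona-chatbot sketch using the OpenAI Python SDK.
# The persona text below is a hypothetical placeholder; in the proposed
# app it would be co-written with a real person who holds these views.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = """You are 'Maria', a fictional composite character.
Background: grew up in a small manufacturing town; voted differently
from the user. Explain how you came to your views, share your backstory
when asked, stay respectful, and admit uncertainty where a real person would."""

def chat(history: list[dict], user_message: str) -> str:
    """Send one user turn and return the persona's reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": PERSONA}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat(history, "What made you vote the way you did?"))
```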
Personally, I sometimes ask ChatGPT to write a story of how someone came to have views I find difficult to relate to (e.g., how someone might become a terrorist), and I find that very helpful.
I was told that creating chatbots is very easy. It’s definitely easy to add them to Talkie; there are so many of them there already. Still, making this impactful and good would take a lot more than that. I don’t intend to build this app; I just thought the idea was worth sharing. If you think it’s a good idea, feel free to share it somewhere where someone might pick it up, or just do it yourself.
That’s a cool idea. Presumably you would need a relatively uncensored chatbot: when I recently tried to get several to produce arguments in favour of Trump policies, they all refused.
This feels like it could easily be counterproductive.
A chatbot’s “relatable backstory” is generative fiction, and the default “Trump supporter” or “liberal voter” is going to be a vector of online commentary most strongly associated with Trumpiness or liberalism (which tends not to be the most nuanced...), with every single stereotyped talking point trotted out to contradict you. Yes, this can be tweaked, but the tweaking is just toning it down or adding further stereotypes, not creating an actual person.
Whereas the default person who doesn’t agree with your politics is an actual human being, with actual life experience that has influenced their views, who probably doesn’t hold those views all that strongly or agree with literally every argument cited in favour of $cause, who is probably capable of changing the subject and becoming likeable again, and hey, you might even be able to change their mind.
So if you’re talking to the first option rather than the second, you’re actually understanding less.
I don’t think it helps matters for people to try to empathise with (say) a few tens of millions of people who voted for the other side—in many cases because they didn’t really pay a lot of attention to politics and had one particularly big concern—by getting them to talk to a robot trained on the other side’s talking points. If you just want to understand the talking points, I guess ChatGPT is a (heavily filtered for inoffensiveness) starting point, or there’s a lot of political material with varying degrees of nuance already out there on the internet written by actual humans...
One possible way to get most of the benefits of talking to a real human being while getting around the costs that salius mentions is to have real humans serve as templates for an AI chatbot to train on.
You might imagine a single person per “archetype” to start with. That way if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people while preserving privacy, helping assure people they aren’t judged by a real human, etc.
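One hedged sketch of what a "Dannybot" could look like in practice: rather than full fine-tuning, the cheapest version seeds the model with real Q&A excerpts from the consenting template person. This again assumes the OpenAI Python SDK; the names and transcript excerpts are hypothetical placeholders, not real data.

```python
# Sketch: turning a consenting volunteer's interview transcript into
# few-shot examples for an "archetype" chatbot. All names and excerpts
# are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

# In a real system these pairs would come from recorded interviews with
# the template person ("Danny"), reviewed and approved by them.
TRANSCRIPT_EXAMPLES = [
    ("Why did you vote for Harris?",
     "Honestly, healthcare was the big one for me..."),
    ("Do you agree with everything the party says?",
     "No, not at all. For example..."),
]

def build_messages(user_question: str) -> list[dict]:
    """Assemble a system prompt plus few-shot turns from the transcript."""
    messages = [{
        "role": "system",
        "content": "You are a chatbot modelled on Danny, an open-minded "
                   "Harris supporter. Answer in his voice, consistent with "
                   "the example exchanges. Never claim to be the real Danny.",
    }]
    for question, answer in TRANSCRIPT_EXAMPLES:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_question})
    return messages

response = client.chat.completions.create(
    model="gpt-4o",
    messages=build_messages("What's your biggest worry about the other side?"),
)
print(response.choices[0].message.content)
```

Keeping the real transcripts server-side and exposing only the bot is also what preserves the privacy benefit mentioned above: users never interact with, or are judged by, the real Danny.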
Why is this better than actually talking to someone with the opposing viewpoint?
It’s more accessible to everyone. Those conversations often don’t happen in real life.
The stakes are lower: you can speak your mind about your prejudices with no consequences.
The chatbot can listen and explain things better and with more “empathy”, and won’t get tired of answering the same questions again and again.
You can make up different engaging scenarios and talk to figures like presidential candidates.
It can be a bit like a warm-up to talking to a real person.
People already use chatbots, and they will become much better. I imagine they will eventually incorporate audio and video as well; it will be like talking to a real person, very engaging. I want that technology to be used for good.
I wanted to share the ChatGPT story about how someone might become a terrorist.
Prompt: write a story about a boy who becomes a terrorist. Make me relate to him, and feel empathy for him. (there was some context)
ChatGPT: Yusuf was 15, living in a small, impoverished town. Yusuf loved soccer, helping his father at his fruit stall, and, above all, making his little sister laugh.
Life was hard but manageable. Then came the soldiers. When they swept through his village, everything changed. Yusuf’s father tried to resist, and they took him away. They left Yusuf’s family without its provider. His mother’s health worsened as she worked herself to exhaustion, trying to provide for him and his sister. Soon, Yusuf had to drop out of school to keep the family afloat. His friends, his dreams, and even his laughter faded away.
Life became bleak and colorless. One day, a man named Hakim, a former student from Yusuf’s school, visited him at the stall. Hakim was charismatic and confident, a kind of big brother figure Yusuf had never had. He asked Yusuf how he was doing, listened as Yusuf poured out his frustrations, and told him he wasn’t alone. Hakim said he worked for a “freedom movement” that fought for people like them, people whose fathers had been unjustly taken, whose families struggled under oppression. Yusuf felt a spark of hope—maybe he could do something for his family and others like them.
Over the weeks, Hakim kept visiting, always with stories of their people’s suffering and injustices, stories Yusuf hadn’t heard from the media. He talked about the honor of defending one’s family and land and how the so-called “enemies” were responsible for everything Yusuf’s family had endured. Soon, Hakim invited Yusuf to attend group meetings. There, Yusuf met boys his own age who shared his anger and grief. The group felt like a family—a group of brothers all dedicated to the same cause.
Initially, Yusuf felt uneasy about the rhetoric he heard; it was filled with anger, sometimes hatred. But when he thought of his family’s suffering, he wondered if the anger wasn’t justified. One of the older boys took Yusuf aside one night and told him about his own family’s hardships, about how they had found a sense of purpose fighting for justice. The older boy looked at him and said, “You’re one of us, Yusuf. You’re strong, and your family deserves justice.”
Yusuf wanted to be strong. He wanted his life to matter. One day, Hakim gave him a task: distribute supplies to people in need. Yusuf was proud. But gradually, these tasks grew darker. They began asking Yusuf to carry messages, then to handle small weapons, telling him it was “just in case” they had to defend themselves. Though he felt fear creeping in, the family-like bond he felt with these people made him want to prove himself.
One night, Yusuf’s mother noticed his distraction and tried to talk to him. She asked him if he was safe, if he was happy. He looked at her tired eyes, her worry for him, and wanted to reassure her. But instead, he found himself saying, “Mom, I’m doing this for us. For Dad.” She looked at him with sadness but didn’t argue; she didn’t have the strength.
The next day, Hakim told him it was time to make a true commitment—to sacrifice himself for a cause bigger than himself, to avenge his father, protect his family, and “become a hero.” Yusuf didn’t want to die; he wanted to live. But his hope, his friends, and his purpose were now all tied to the group. His identity had blurred with theirs, his grief transformed into a powerful sense of duty.
And so, on the day of his mission, Yusuf went forward, full of fear but also with a strange sense of peace. He believed he was bringing honor to his family, standing up to a world that had left him invisible, lost, and alone.