This feels like it could easily be counterproductive.
A chatbot’s “relatable backstory” is generative fiction, and the default “Trump supporter” or “liberal voter” is going to be a vector of online commentary most strongly associated with Trumpiness or liberalism (which tends not to be the most nuanced...), with every single stereotyped talking point trotted out to contradict you. Yes, this can be tweaked, but the tweaking is just toning it down or adding further stereotypes, not creating an actual person.
Whereas the default person who doesn’t agree with your politics is an actual human being, with actual life experience that has influenced their views, who probably doesn’t hold those views all that strongly or agree with literally every argument cited in favour of $cause, is probably capable of changing the subject and becoming likeable again, and hey, you might even be able to change their mind.
So if you’re talking to the first option rather than the second, you’re actually understanding less.
I don’t think it helps matters for people to try to empathise with (say) a few tens of millions of people who voted for the other side—in many cases because they didn’t really pay a lot of attention to politics and had one particularly big concern—by getting them to talk to a robot trained on the other side’s talking points. If you just want to understand the talking points, I guess ChatGPT is a (heavily filtered for inoffensiveness) starting point, or there’s a lot of political material with varying degrees of nuance already out there on the internet written by actual humans...
One possible way to get most of the benefits of talking to a real human being while getting around the costs that salius mentions is to have real humans serve as templates for an AI chatbot to train on.
You might imagine a single person per “archetype” to start with. That way if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people while preserving privacy, helping assure people they aren’t judged by a real human, etc.