Somewhat related is this recent paper by Costello and colleagues, who found that engaging in a dialogue with GPT-4 durably decreased conspiracy beliefs (HT Lucius).
Perhaps social scientists can help with research on how best to design LLMs to improve people’s epistemics, or at least to ensure that interacting with LLMs doesn’t worsen them.
I’m excited about work in this area.