Somewhat relatedly, what about using AI to improve not your own (or your project’s) epistemics, but public discourse? Something like “improve news” or “improve where people get their information on controversial topics.”
Edit: To give more context, I was picturing something like training LLMs to pass ideological Turing tests and then produce summaries of the strongest arguments for and against a given position, as well as takedowns of common arguments from each side that are clearly bad.
And maybe combine that with commenting on current events as they unfold (to gain traction), handling the tough balance between competing in the attention landscape and still adhering to high epistemic standards. The goal would then be to become something like a “trusted source of balanced reporting,” which you could later direct to the issues that matter most (after gaining traction earlier by discussing all sorts of things).
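To make the ideological-Turing-test summary idea a bit more concrete, here is a minimal prompt-level sketch (not a training pipeline). The prompt wording is purely illustrative, and `call_llm` is a hypothetical stand-in for whatever chat-completion API you would actually wire up.

```python
# Minimal sketch of the "steelman both sides" summarizer described above.
# `call_llm` is a hypothetical stand-in for a real chat-completion API call.

from typing import Callable

STEELMAN_PROMPT = """You are writing for readers on all sides of a controversy.
Topic: {topic}

1. State the strongest honest case FOR the position, as its smartest advocates
   would make it (it should pass an ideological Turing test: advocates should
   endorse it as a fair statement of their view).
2. State the strongest honest case AGAINST, to the same standard.
3. List common arguments from EACH side that are clearly weak, and briefly
   explain why they are weak.

Do not editorialize about which side is right."""


def balanced_summary(topic: str, call_llm: Callable[[str], str]) -> str:
    """Produce a both-sides steelman summary plus takedowns of weak arguments."""
    return call_llm(STEELMAN_PROMPT.format(topic=topic))


if __name__ == "__main__":
    # Example wiring with a dummy backend; swap in a real model call here.
    def dummy_llm(prompt: str) -> str:
        return "[model output would go here]"

    print(balanced_summary("rent control", dummy_llm))
```

In practice you would presumably go well beyond a single prompt (fine-tuning, adversarial review by people who actually hold each view, etc.), but even this framing shows what “pass an ideological Turing test” cashes out to as an output spec.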