I’ve found it useful both for posts and for considering research and evaluations of research for Unjournal, with some limitations, of course.
- The interface can be a little overwhelming, as it reports so many different outputs at the same time, some of them overlapping...
+ ...but I expect it’s already pretty usable, and I expect this to improve.
+ It’s an agent-based approach, so as LLM models improve you can swap in the new ones.
I’d love to see some experiments with directly integrating this into the EA Forum or LessWrong in some way, e.g. automatically running some checks on posts, on drafts, or even on comments—or perhaps offering that as an opt-in. It could be a step towards systematic ways of improving the dialogue on this forum, and on forums and social media in general, perhaps. This could also provide a good opportunity for human feedback that could improve the model, e.g. people could upvote or downvote the “roast my post” assessments, etc.