We’re considering pushing this further and investing in a more bespoke, shareable, and automated platform:
Improving Technology and User Experience: We want to build better tools for scholars, policymakers, journalists, and philanthropists to engage with impactful research and The Unjournal’s work. This includes developing interactive LLM tools that allow users to ask questions about the research and evaluation packages, creating a more engaging and accessible experience. We also want to extend this to a larger database of impactful research, providing tools to aggregate and share users’ insights and discussion questions with researchers and beyond.
This may take the form of research/evaluation conversational notebooks, inspired by tools like NotebookLM and Perplexity. These notebooks would be automatically generated from our content (e.g., at unjournal.pubpub.org) and continuously updated. We envision:
- Publicly shareable notebooks, also enabling users to share their notebook outputs
- One notebook for each evaluation package, as well as notebooks covering everything in a particular area or related to an identified pivotal question
- A semantic search tool for users to query “what has The Unjournal evaluated?” across our entire database
- Embedded explanations of the evaluation context, including The Unjournal’s goals and approach
- Clear sourcing and transparent display of sources, drawing on both our content and basic web data, with academic citation and linking support
- Fine-tuning and query engineering to align the explanations with our communication style (and, in particular, to clarify which points were raised by Unjournal managers, independent evaluators, or authors)
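To make the semantic search idea concrete, here is a minimal sketch. It uses a toy bag-of-words similarity in place of a neural embedding model, and the package IDs and summaries are hypothetical stand-ins for our database:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (a real system would use a neural embedding model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical evaluation-package summaries (illustrative only)
corpus = {
    "pkg-001": "cash transfers and child health outcomes",
    "pkg-002": "air pollution and mortality meta-analysis",
    "pkg-003": "deworming and long-run earnings replication",
}

def search(query: str, k: int = 2) -> list[str]:
    """Rank evaluation packages by similarity to the user's query."""
    q = embed(query)
    return sorted(corpus, key=lambda pid: cosine(q, embed(corpus[pid])), reverse=True)[:k]

print(search("what has The Unjournal evaluated on child health?"))
```

A deployed version would embed the full evaluation texts and use a vector index, but the ranking logic is the same.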
Beyond this, we aim to:
- Incorporate a wider set of research (e.g., all the work in our prioritized database)
- Leverage users’ queries and conversations (with their permission) to provide feedback to researchers and practitioners on:
  - Frequently asked questions and requests for clarification, especially those that were not fully resolved
  - Queries and comments suggesting doubts and scope for improvement
  - Ways users are approaching and incorporating the research in their own practice
- … and, similarly, to provide feedback to evaluators and to inform our own (Unjournal) approaches
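One simple way the logged queries could be distilled into feedback is keyword aggregation: terms that recur across many users’ questions become candidate FAQ themes to relay to authors. A minimal sketch, with hypothetical queries and an illustrative stopword list:

```python
import re
from collections import Counter

# Hypothetical user queries, logged with permission (illustrative only)
queries = [
    "How was attrition handled in the trial?",
    "Was attrition addressed in the robustness checks?",
    "What is the external validity of these results?",
    "How did the authors handle attrition bias?",
]

STOPWORDS = {"how", "was", "the", "in", "what", "is", "of", "these", "did"}

def keywords(query: str) -> set[str]:
    """Content words in a query, lowercased, punctuation stripped."""
    return {t for t in re.findall(r"[a-z]+", query.lower()) if t not in STOPWORDS}

# Count how many distinct queries mention each term; recurring terms become FAQ themes
theme_counts = Counter(t for q in queries for t in keywords(q))
faq_themes = [t for t, n in theme_counts.most_common() if n >= 3]
print(faq_themes)  # recurring themes to relay to the authors
```

A production version might instead cluster paraphrased questions with embeddings and track whether each theme was resolved in the evaluation exchange, but the aggregation idea is the same.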
Ultimately, this tool could become a “killer app” for conveying questions and feedback to researchers to help them improve and extend their work. In the long term, we believe these efforts could contribute to building future automated literature review and evaluation tools, related to the work of Elicit.org, Scite, and Research Rabbit.
We will support open-source, adaptable software. We expect these tools to be useful to other aligned orgs (e.g., to support ‘living literature reviews’).
david_reinstein comments on ‘Chat with impactful research & evaluations’ (Unjournal NotebookLMs)