‘Chat with impactful research & evaluations’ (Unjournal NotebookLMs)
Post status: First-pass; looking for feedback. We aim to build and share something more polished and comprehensive.
The Unjournal: background, progress, push for communications and impact
The Unjournal is a nonprofit that publicly evaluates and rates research, focusing on impact.
We now have about 30 “evaluation packages” posted (here; these are indexed in the scholarly ecosystem). Each package links the (open-access) paper and contains 1-3 expert evaluations and ratings of the work, as well as a synthesis and the evaluation manager’s report. Some also include author responses. We’re working to make this content more visible and more useful, including through accessible public summaries.
Also see:
our Pivotal Questions initiative
our regularly updated ‘research with potential for impact’ database
Notebook/chatbot exploration (esp. NotebookLM)
We’re considering/building another tool: a notebook/chatbot that will let you ask questions about the research and the evaluations. We’re trialing a few approaches (e.g., building with AnythingLLM) and wanted to get early thoughts and opinions.
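To make this concrete, here’s a minimal sketch of the kind of query flow we’re trialing. It assumes a local AnythingLLM instance with a workspace we’ve already created and filled with an evaluation package’s sources; the endpoint path, payload fields, and workspace slug below are assumptions based on AnythingLLM’s developer API, so check your own instance’s API docs before relying on them.

```python
# Sketch: querying an evaluation-package workspace in AnythingLLM.
# ASSUMPTIONS: a local AnythingLLM instance on its default port, an API key
# generated in the instance settings, and a hand-made workspace slug
# "banerjee-nudge". Verify the endpoint and payload against your instance's
# own API documentation.
import requests

BASE_URL = "http://localhost:3001/api/v1"  # default AnythingLLM port (assumption)
API_KEY = "YOUR-ANYTHINGLLM-API-KEY"       # placeholder

def ask_workspace(slug: str, question: str) -> str:
    """Send one question to a workspace and return the model's answer."""
    resp = requests.post(
        f"{BASE_URL}/workspace/{slug}/chat",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"message": question, "mode": "query"},  # "query": answer only from sources
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json().get("textResponse", "")

if __name__ == "__main__":
    print(ask_workspace(
        "banerjee-nudge",  # hypothetical workspace for the Banerjee et al. package
        "What did the evaluators see as the paper's main limitations?",
    ))
```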
NotebookLM seems particularly easy to set up and is yielding some positive results. The tool got some attention on academic social media for its AI-generated podcasts of research (aimed at a lay audience; cheerleading in tone, with some inaccuracies; e.g., see here and here, with caution), but its notebook chat feature seems more useful for us.
We can upload or scrape a range of sources, including the research paper itself, the evaluations, responses, and syntheses, and even related podcast transcripts and blogs.
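As a rough illustration of the collection step, here’s a sketch that pulls the paragraph text of one of our PubPub pages for upload as a notebook source. The URL and the deliberately naive parsing are illustrative assumptions, not a description of PubPub’s actual page structure; a real scraper would need to inspect the HTML.

```python
# Sketch: collecting the text of an evaluation-package page as a plain-text
# source file for a notebook. The URL below is a placeholder; the parser
# simply grabs all paragraph text rather than targeting PubPub's real markup.
import requests
from bs4 import BeautifulSoup

def page_to_source_text(url: str) -> str:
    """Fetch a page and return its paragraph text, one paragraph per block."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
    return "\n\n".join(p for p in paragraphs if p)

if __name__ == "__main__":
    # Hypothetical package URL on our PubPub site.
    text = page_to_source_text("https://unjournal.pubpub.org/pub/example-package")
    with open("evaluation_package_source.txt", "w", encoding="utf-8") as f:
        f.write(text)
```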
NotebookLM seems to give fairly useful and accurate answers. It sometimes mixes up things like ‘what we suggest future evaluators do’ and ‘what the evaluators actually wrote’. But it tracks and links the sources for each answer, so you can double-check it pretty easily.
Some limitations: The formatting and UX leave a bit to be desired (e.g., you need to click ‘notebook guide’ a lot). It can be hard to see exactly where the referenced content comes from; especially if it comes from a scraped website, the formatting can be weird. I don’t see an easy way to upload or download content in bulk. Saved ‘notes’ lose the links to the references.
So far we’ve made notebooks:
for the aforementioned ‘Lead’ paper,
for our evaluations of “Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament” (also incorporating the 80k podcast on this), and
for Banerjee et al. (“Selecting the Most Effective Nudge: Evidence from a Large-Scale Experiment on Immunization”).
To request access
At the moment you can’t share the notebooks publicly. If you share a non-institutional email, we can give you access to specific notebooks as we create them. To do so, complete the 3-question request form here.
We’d like your feedback (thanks!)
We’ve only created a few of these so far, but we could create more without too much effort. Before we dive in, we’re looking for suggestions on:
Are these useful? How could they be made more useful?
Is NotebookLM the best tool here? What else should we consider?
How to best automate this process?
Any risks we might not be anticipating?
We’re considering pushing this further and investing in a more bespoke, shareable, and automated platform:
Improving Technology and User Experience: We want to build better tools for scholars, policymakers, journalists, and philanthropists to engage with impactful research and The Unjournal’s work. This includes developing LLM tools that let users ask questions about the research and evaluation packages, creating a more interactive and accessible experience. We also want to extend this to a larger database of impactful research, providing tools to aggregate and share users’ insights and discussion questions with researchers and beyond.
This may take the form of research/evaluation conversational notebooks, inspired by tools like NotebookLM and Perplexity. These notebooks would be automatically generated from our content (e.g., at unjournal.pubpub.org) and continuously updated. We envision:
Publicly shareable notebooks, also enabling users to share their notebook outputs
One notebook for each evaluation package, as well as notebooks covering everything in a particular area or related to an identified pivotal question.
A semantic search tool for users to query “what has The Unjournal evaluated?” across our entire database (see the sketch after this list)
Embedded explanations of the evaluation context, including The Unjournal’s goals and approach
Clear sourcing and transparent display of sources within both our content and basic web data, with academic citation and linking support
Fine-tuning and query engineering to align the explanations with our communication style (and, in particular, to clarify which points were raised by Unjournal managers versus independent evaluators versus authors)
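To make the semantic-search idea concrete, here is a minimal sketch of the mechanics using off-the-shelf sentence embeddings. The model choice and the three stand-in summaries are placeholders; a real version would index our full evaluation database and link results back to unjournal.pubpub.org.

```python
# Minimal semantic-search sketch over evaluation summaries using
# sentence-transformers. The corpus here is a stand-in; a real index would
# cover every evaluation package and link back to its page.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedding model

# Placeholder summaries standing in for our evaluation database.
corpus = [
    "Evaluations of a large-scale immunization nudge experiment.",
    "Evaluations of a long-run forecasting tournament on existential risks.",
    "Evaluations of research on the impacts of lead exposure.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

def search(query: str, top_k: int = 2):
    """Return the top_k corpus entries most similar to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]
    return [(corpus[h["corpus_id"]], float(h["score"])) for h in hits]

if __name__ == "__main__":
    for text, score in search("What has The Unjournal evaluated on vaccination?"):
        print(f"{score:.2f}  {text}")
```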
Beyond this, we aim to:
Incorporate a wider set of research (e.g., all the work in our prioritized database)
Leverage users’ queries and conversations (with their permission) to provide feedback to researchers and practitioners on
Frequently asked questions and requests for clarification, especially those that were not fully resolved
Queries and comments suggesting doubts and scope for improvement
Ways users are approaching and incorporating the research in their own practice
… and similarly, to provide feedback to evaluators, as well as feedback that informs our own (Unjournal) approaches
Ultimately, this tool could become a “killer app” for conveying questions and feedback to researchers to help them improve and extend their work. In the long term, we believe these efforts could contribute to building future automated literature review and evaluation tools, related to the work of Elicit.org, Scite, and Research Rabbit.
We will support open-source, adaptable software. We expect these tools to be useful to other aligned orgs (e.g., to support ‘living literature reviews’).