Unjournal AI-assisted research prioritization dashboard (very early prototype)
We’ve been experimenting with using LLMs to help identify and prioritize research for Unjournal evaluation, to complement human prioritization (and to learn from the comparison). We now have a public prototype dashboard:
uj-prioritization-dashboard.netlify.app
What it does: Automatically discovers recent papers from NBER, arXiv (econ), CEPR, SSRN, Semantic Scholar, EA Forum paper links, and OpenAlex, then scores them using AI models (GPT-5.4 family) against our prioritization criteria — decision relevance, prominence, timing value, and methodological potential.
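For a rough sense of what the discovery step could look like, here is a minimal sketch pulling recent economics papers from the public OpenAlex API. The concept ID, field selection, and date filter are illustrative assumptions, not our actual pipeline code:

```python
# A minimal sketch of the discovery step, assuming the public OpenAlex
# REST API. The Economics concept ID (C162324750), field selection, and
# date filter are illustrative assumptions, not the pipeline's actual code.
import requests

def fetch_recent_econ_papers(from_date: str, per_page: int = 25) -> list[dict]:
    """Fetch recent economics papers (metadata plus inverted abstract)."""
    resp = requests.get(
        "https://api.openalex.org/works",
        params={
            "filter": f"from_publication_date:{from_date},concepts.id:C162324750",
            "select": "id,title,abstract_inverted_index,authorships,publication_date",
            "per-page": per_page,
            "sort": "publication_date:desc",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]

def deinvert(inv: dict[str, list[int]]) -> str:
    """Rebuild abstract text from OpenAlex's inverted index."""
    positions = {i: word for word, idxs in inv.items() for i in idxs}
    return " ".join(positions[i] for i in sorted(positions))
```

In practice the same loop would run per source (NBER, arXiv, SSRN, etc.), each with its own fetcher, before the scoring step.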
Important caveats:
This is very preliminary, and the AI recommendations are not yet well-calibrated. Many of the suggestions are mediocre; we’re sharing it for transparency and feedback, not because it’s producing great output yet.
This is supplementary to our existing Public Database of Prioritized Research on Coda (https://coda.io/d/Unjournal-Public-Pages_ddIEzDONWdb/Public-Database-of-Prioritized-Research_sutD341G#_luToq6IH).
Scores reflect evaluation priority (expected value of commissioning an independent review), not research quality.
At the moment, the AI only sees paper metadata and abstracts, not full texts.
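To make that last constraint concrete, here is a hedged sketch of what a scoring call on metadata and abstract alone could look like. It assumes the OpenAI Python SDK; the model string is a placeholder, and the rubric wording is illustrative rather than our actual prompt:

```python
# A hedged sketch of the scoring step: only title, metadata, and abstract
# go into the prompt. Assumes the OpenAI Python SDK; the model string is
# a placeholder and the rubric wording is illustrative, not the real prompt.
import json
from openai import OpenAI

client = OpenAI()
CRITERIA = ["decision relevance", "prominence", "timing value",
            "methodological potential"]

def score_paper(title: str, metadata: str, abstract: str,
                model: str = "gpt-5.4") -> dict:
    prompt = (
        "You are prioritizing papers for independent evaluation. "
        f"Rate this paper 0-100 on each of: {', '.join(CRITERIA)}. "
        "Scores should reflect the expected value of commissioning a "
        "review, not overall research quality. "
        "Respond with a JSON object mapping each criterion to a score.\n\n"
        f"Title: {title}\nMetadata: {metadata}\nAbstract: {abstract}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)
```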
There’s also a statistics page showing the breakdown by source, cause area, and score distribution.
Feedback welcome. You can also comment directly on the page via Hypothes.is, and we’ll adapt accordingly.
Very cool.
I work on text parsing / meta science and do a lot of stuff like this on the side and for my lab.
https://docgmedicalsummaries.com/rankings
I’ve done something similar for ranking clinical medicine articles; it’s pretty similar to your site, but I might be able to share some insights. (I might comment more later regardless; just throwing this up for now so I remember.)
Edit: just to note, signing up will auto-subscribe you to emails, but it should be easy to unsubscribe. You can also see how we do rankings without signing up on the landing page.
Thanks, I’d be up for hearing insights. This is related to a larger project (see https://llm-uj-research-eval.netlify.app/), but this part of it is still pretty early stage.
Will DM