I’m trying to figure out how to best find some examples from this.
Focusing on social science, economics, and impact evaluation (without digging too deeply into technical microbiology or technical AI, etc.)
Looking for work aiming at academic standards of rigor; but it doesn’t need to be in a peer-reviewed publication (perhaps ideally it has not yet been peer reviewed but it’s aiming at it)
Should I filter on “type=Academic paper”? Will this include working papers not yet in a journal? Also I see some things under ‘report’ that seem academic
Looking for work with direct relevance for choices by EA funders… or directly relevant to crucial considerations; should I filter on “policy relevance = High”? Or maybe include “Medium”?
I’m particularly interested in work that:
A. Makes empirical claims, analyses data, runs experiments/trials, runs simulations, Fermi estimations, and calibrations based on real data points
B. (Slightly less so): Makes logical mathematical arguments (typically in economics) for things like mechanisms to boost public goods provision in particular practicable contexts.
Do you have any suggestions on how to best use your Airtable? Any works that stand out? Maybe worth adding a column in your Airtable to flag relevance to the Unjournal?
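(For what it's worth, the filters I have in mind above could be sketched on a CSV export of the Airtable roughly as follows. The column names "Type" and "Policy relevance" and their values are my guesses at the schema, not necessarily what the base actually uses.)

```python
# Hypothetical sketch: filtering a CSV export of the Airtable with pandas.
# "Type" and "Policy relevance" are assumed column names, not the base's
# confirmed schema.
import pandas as pd


def filter_candidates(df: pd.DataFrame) -> pd.DataFrame:
    """Keep academic papers (including working papers, if the Type field
    covers them) rated High or Medium on policy relevance."""
    return df[
        (df["Type"] == "Academic paper")
        & (df["Policy relevance"].isin(["High", "Medium"]))
    ]
```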
Hi David. My sense is that the database is currently too simple to be used for your purposes. I'll have to give some more thought to what could be added to help researchers differentiate between the publications. Risk category and policy relevance were the two most immediately relevant for my purposes, but I'm sure there are others worth including.
The Academic Paper filter does include working papers—maybe something I should delineate going forward. Many of the Reports are from academic institutions, but they’re policy reports or technical reports as opposed to for an academic journal.
From memory, few publications were highly quantitative or mathematical. If I were to add a column based on this, what would be the best criteria? Theoretical vs. empirical?
Thanks. Your answers are very helpful!
My skim also suggested that there was a lot that would be hard for academic economists and other people in the general area to evaluate. (But some of it does seem workable, and I'm adding it to my list.)
One of the challenges was that a lot of the work makes or considers multiple claims, and seems to give semi-rigorous, common-sense, anecdotal and case-study-based evidence for these claims. Other work involves areas of expertise that we are not targeting: some "hard" expertise in the natural and physical sciences or legal scholarship, and some in perhaps "softer" areas of non-technical policy analysis and management. (Of course this is my impression based on very quick skims, so I'm probably getting some things wrong.)
Your suggestion to divide this up into "empirical" versus "theoretical" certainly seems useful. If you do that, I think it would help. I'm just trying to think whether there is an even better way to break it up.
I guess another part of my conception for the Unjournal, or at least for the current appeal, was to find some pieces of research that really made specific, substantial claims that could be considered in more detail… considering their methodology, the appropriateness of the data, the mathematical arguments, et cetera. When the papers take on very broad or subtle issues, this is a bit harder to do. Of course I recognise that the "broad and shallow review" is very important, but it is perhaps harder to assess the rigour of things like this.
Thank you. This looks helpful as a source of pivotal empirical work as starting examples for the Unjournal.