Thanks for this, Rumtin! Let me know if you want to start an associated newsletter to popularise this database and update people about new additions. If so, I can share the process and template for the EA Behavioral Science newsletter to build off.
Creating more work for me, Peter! ;) No, it’s a great idea—I definitely want to promote updates to both the research database and the policy idea database. If you can share the materials you have, I would appreciate it!
I have written something up here. Thanks for your patience!
I love making work for people :) Ok, great! I think that the best way for me to do this is to write up a short forum post explaining the process and linking to the various documents. Let me know if you disagree. If not, I’ll try to do it in the next week or two and email you in case you miss it.
Updates would be fantastic.
Thank you. This looks helpful as a source of pivotal empirical work as starting examples for the Unjournal.
I’m trying to figure out how best to find some examples from this.
Focusing on social science, economics, and impact evaluation (without digging too deeply into technical microbiology or technical AI, etc.)
Looking for work aiming at academic standards of rigor, but it doesn’t need to be in a peer-reviewed publication (perhaps ideally it has not yet been peer reviewed but is aiming at it)
Should I filter on “type=Academic paper”? Will this include working papers not yet in a journal? Also, I see some things under ‘Report’ that seem academic.
Looking for work with direct relevance to choices by EA funders… or directly relevant to crucial considerations. Should I filter on “policy relevance = High”? Or maybe include “Medium”? (I sketch the filtering I have in mind at the end of this comment.)
I’m particularly interested in work that:
A. Makes empirical claims, analyses data, runs experiments/trials or simulations, and does Fermi estimations and calibrations based on real data points
B. (Slightly less so:) Makes logical/mathematical arguments (typically in economics) for things like mechanisms to boost public goods provision in particular practicable contexts.
Do you have any suggestions on how best to use your Airtable? Any works that stand out? Maybe worth adding a column in your Airtable to flag relevance to the Unjournal?
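To make that concrete, here’s a rough sketch of the filtering I have in mind, run over a CSV export of the Airtable. The filename and column names (“Type”, “Policy relevance”, “Title”, “Authors”) are just my guesses at the actual field labels:

```python
# Rough sketch (untested against the real export): shortlist entries that
# are academic papers with high or medium policy relevance.
import pandas as pd

# Hypothetical filename and column names -- adjust to the actual export.
df = pd.read_csv("xrisk_database_export.csv")

shortlist = df[
    (df["Type"] == "Academic paper")
    & (df["Policy relevance"].isin(["High", "Medium"]))
]
print(shortlist[["Title", "Authors"]])
```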
Hi David. My sense is that the database is currently too simple to be used for your purposes. I’ll have to give some more thought to what could be added to help researchers differentiate between the publications. Risk category and policy relevance were the two most immediately relevant for my purposes, but I’m sure there are others worth including.
The Academic Paper filter does include working papers—maybe something I should delineate going forward. Many of the Reports are from academic institutions, but they’re policy or technical reports, as opposed to papers written for an academic journal.
From memory, few publications were highly quantitative or mathematical. If I were to add a column based on this, what would be the best criteria? Theoretical vs. empirical?
Thanks. Your answers are very helpful! My skim also suggested that there was a lot that would be hard for academic economists and other people in the general area to evaluate. (But some of it does seem workable, and I’m adding it to my list.)
One of the challenges was that a lot of the work makes or considers multiple claims, and seems to give semi-rigorous, common-sense, anecdotal, and case-study-based evidence for these claims. Other work involves areas of expertise that we are not targeting: some “hard” expertise in the natural and physical sciences or legal scholarship, and some in perhaps “softer” areas of non-technical policy analysis and management. (Of course, this is my impression based on very quick skims, so I’m probably getting some things wrong.)
Your suggestion to divide this up into “empirical” versus “theoretical” certainly seems useful; if you do that, I think it would help. I’m just trying to think whether there is an even better way to break it up.
I guess another part of my conception for the Unjournal, or at least for the current appeal, was to find some pieces of research that make specific, substantial claims that could be considered in more detail… considering their methodology, the appropriateness of the data, the mathematical arguments, et cetera. When the papers take on very broad and subtle issues, this is a bit harder to do. Of course I recognise that the “broad and shallow review” is very important, but it is perhaps harder to assess the rigour of things like this.
Thanks Rumtin for this, it’s a fantastic resource. One thing I note, though, is that some of the author listings are out of order (this is actually a problem in Terra’s CSVs too, from which I think some of the content in your database is imported). For example, item 70 by ‘Tang’ (who is indeed an author) is actually first-authored by ‘Wagman’, as per the link. I had this problem using Terra, where I kept thinking I was finding papers I’d previously missed, only to discover they were the same paper but with the authors in a different order. Maybe at some point a verification/QC process could be implemented (in both these databases, and Terra too) to clean them up a little. Great work!
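To illustrate the kind of QC check I mean: normalising the author order before comparing entries would catch these duplicates. A minimal sketch, assuming a CSV export with “Title” and “Authors” columns and semicolon-separated author names (all of which are my guesses at the actual format):

```python
# Minimal sketch: flag entries that are probably the same paper
# listed twice with the authors in a different order.
import pandas as pd

df = pd.read_csv("xrisk_database_export.csv")  # hypothetical filename

def dedupe_key(row):
    # Lowercase the title and sort the author names, so
    # "Tang; Wagman" and "Wagman; Tang" collapse to the same key.
    title = str(row["Title"]).strip().lower()
    authors = ";".join(sorted(a.strip().lower() for a in str(row["Authors"]).split(";")))
    return title + "|" + authors

df["key"] = df.apply(dedupe_key, axis=1)

# Entries sharing a key are likely duplicates worth manual review.
dupes = df[df.duplicated("key", keep=False)]
print(dupes[["Title", "Authors"]])
```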
Thanks! BTW, I found that some of my x-risk-related articles are included while others are not. I don’t think it’s because the not-included articles are more off-topic, so your search algorithm may be failing to find them.
Examples of my published relevant articles that were not included:
The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI
Islands as refuges for surviving global catastrophes
Surviving global risks through the preservation of humanity’s data on the Moon
Aquatic refuges for surviving a global catastrophe