[Question] What “pivotal” and useful research … would you like to see assessed? (Bounty for suggestions)
Update 28 Apr 2022 -- Only ~10 responses so far, many of which are general areas rather than specific papers/findings/projects. So the ‘bounty expected return’ is still high.
Below, I give some work that I thought might be especially relevant, as examples.
Update 22 Apr 2022: Bounty[1]
Update 5 Jul 2022: The bounty prizes have been awarded. Ross Tieman (ALLFED) made a suggestion we are piloting ($250 prize), and 3 other people made eligible suggestions; we drew one randomly, awarding $250.
Please continue to make suggestions though; still very valuable, and we aim to award retroactive prizes too.
What papers, findings, projects, or pieces of research (academic or non-academic) would you most like to see carefully and rigorously evaluated?[2]
Which specific results do you rely on in making key decisions, or which ones do you think that large EA-aligned donors and organizations rely on heavily?
I’m considering this particularly as...
Demonstration and test-cases for the Unjournal in our first months, as well as our general agenda (see the previous request and my earlier post)
We have put a bounty on this, explained below
Research projects the Unjournal should consider in its first year
I plan to offer a bounty on successful suggestions for this, and entries will also be eligible for future bounties.
Also: these could be great use cases for the Red Team Market (https://redteammarket.com/, tied to the eminent Daniel Lakens); this initiative looks very promising to me. (But please note that the bounty relates only to the Unjournal.)
I’m looking especially for:
specific papers, and findings within these papers and projects, that we lean on a lot
empirical work that would benefit from an open-science/replicability assessment or an assessment of the methodology
More on what I am looking for
Some combination of work …
Focusing on social science, economics, and impact evaluation (without digging too deeply into technical microbiology or technical AI, etc.)
Aiming at academic standards of rigor; perhaps it is in the process of peer review or aiming at it
With direct relevance for choices by EA funders… or to crucial considerations
“Empirical and/or quantitative”: Makes empirical claims, analyses data, runs experiments/trials, runs simulations, Fermi estimations, and calibrations based on real data points
“Applied/applicable (economics/decision science) theory”: Makes logical mathematical arguments (typically in economics) for things like mechanisms to boost public goods provision in particular contexts.
Unjournal would be particularly good for
Work in replicable dynamic documents (Rmarkdown, Quarto, Jupyter, etc)
… or otherwise hard to fit into a ‘frozen pdf’
Ongoing projects you will ‘continue to build on in the same place’
Work that is hard-to-place in standard journals …
… because it is interdisciplinary (but rigorous) and requires multiple dimensions of expertise to evaluate
or because it is more impactful and robust than it is ‘novel’
(And thus we particularly value suggestions for work like this.)
The Bounty
Full details, T&C, and other considerations are here; excerpted below.
The prizes are:
1. “Piloted suggestion prize”: $250 x 1-3 … prizes for each of the 1-3 suggested research projects that we choose as piloting/proof-of-concept examples
2. “Participation prize”: A $150-$300 prize (see below)
Drawn randomly among …
All people (other than those winning prize 1) who submit suggestions in a format we can consider. They must link a piece of research or a project and give at least one sentence justifying its relevance.
Anyone who sends a posted letter (see footnote)[3]
If we use two or more suggestions for piloting/proof-of-concept, the Participation prize will be $150.
If we use only one, it will be $250.
If we use none, it will be $300.
3. (Potential) Additional prize qualification: We intend to have a general bounty for suggesting projects and papers that are assessed through the Unjournal. We will decide this after/during the Unjournal piloting process. Even if your suggestion is not chosen as a piloting example, if we later choose to have it assessed through the Unjournal, you will be eligible for any associated bounty.
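For concreteness, the participation-prize schedule above can be sketched as a simple function (a sketch of the stated amounts only, not the official T&C):

```python
def participation_prize(num_suggestions_used: int) -> int:
    """Participation-prize amount, given how many submitted suggestions
    end up being used for piloting (per the schedule above)."""
    if num_suggestions_used >= 2:
        return 150
    if num_suggestions_used == 1:
        return 250
    return 300  # no suggestions used for piloting
```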
Timeline for bounty
We announced this bounty on 22 April 2022. We intend to choose the pilot projects within one month. I will impose a ‘final resolution’ date of July 4, 2022, at the latest (but I hope to resolve this earlier). As noted, the ‘additional prize’ bounties may carry forward after this date.
Responding
You can:
respond below,
DM me,
or in THIS Airtable form.
The Airtable form is the most helpful way to respond: it is the best way to make sure you are addressing what we are asking, the easiest way to be eligible for the prize, and it ensures we have your contact info.
However, we appreciate all forms of response. Even if you fill out the form, it can be helpful to also post your suggestion below to foster conversation.
You can submit as many entries as you like.
Recognition and anonymity
We intend to publicly recognize all suggestions that we use, unless you say you want to remain anonymous. If you want to remain anonymous even to us (the Unjournal organizers), you can submit the Airtable form without leaving any contact information, but then you will not be eligible for the bounty prize (as we cannot contact you).
Some suggested examples of the ‘sort of thing we might be looking for’
0. Eva Vivalt, 2020, “How Much Can We Generalize From Impact Evaluations?”, JEEA
One-click link
Why (not) this paper?
Firstly, I suspect this paper has already been rather thoroughly assessed, and it is published in a respected journal.[4] So I’m not suggesting this actual paper, at least not for our early stages, but rather papers and projects like it.
Why:
Impact evaluations drive GiveWell, Open Philanthropy, and government aid organizations’ recommendations and actions in the global health space (and beyond). This is obviously core to EA.
This is a serious, methodologically rigorous, effortful, and well-documented quantitative assessment of how well the insights from these studies generalize.
The meta-analytic methods used are also highly relevant for EA research organizations
The work is empirical, and all code and data is shared. But the journal publication formats do not make replication as easy as it should be.
The results themselves (e.g., figure 3, table 4, the regression tables) could also be better presented in a dynamic document, allowing users to filter, zoom in, and choose what to look at (e.g., the Plotly tools, and all the great stuff we see at Our World in Data).
Users/readers of this research also have a number of value-based and empirical judgments to make, and could derive all sorts of useful personalized recommendations. This is enabled by dynamic formats; in fact, this is what the author’s organization/site “AidGrade” works to do.[5]
A large part of the paper is essentially a reprise and tutorial on Bayesian meta-analysis and hierarchical models. In a dynamic format this could be presented in a much more ‘teachable way’, allowing expanding boxes, out-links and hover-overs, animations, etc.
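To give a flavor of the hierarchical/partial-pooling machinery the paper tutorializes, here is a minimal empirical-Bayes sketch in Python. This is a toy random-effects estimator for illustration, not Vivalt’s actual model:

```python
import numpy as np

def partial_pool(estimates, ses):
    """Shrink study-level effect estimates toward the precision-weighted
    grand mean, by each study's reliability (toy random-effects model)."""
    estimates, ses = np.asarray(estimates, float), np.asarray(ses, float)
    w = 1.0 / ses**2                                 # inverse-variance weights
    mu = np.sum(w * estimates) / np.sum(w)           # pooled mean
    # Method-of-moments between-study variance (DerSimonian-Laird style)
    q = np.sum(w * (estimates - mu) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)
    # Shrinkage factor: noisier studies get pulled harder toward mu
    shrink = tau2 / (tau2 + ses**2)
    return mu, tau2, shrink * estimates + (1 - shrink) * mu
```

A dynamic document could let the reader vary the inputs and watch the shrinkage change, which is exactly the kind of ‘teachable’ presentation suggested above.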
1. Kremer et al, “Advance Market Commitments”
Kremer, M., Levin, J. and Snyder, C.M., 2020, May. Advance Market Commitments: Insights from Theory and Experience. In AEA Papers and Proceedings (Vol. 110, pp. 269-73). One-click link here
Why this paper?
Advance market commitments for vaccines and drugs seem highly relevant to both global health and development priorities, and to reducing catastrophic and/or existential risk from future pandemics. This is also a practicable policy.
The authors make specific empirical claims based on specific calculations that could be critically assessed[6], as well as a specific formal economic model of maximization, with empirical implications.
The authors are well-respected in their field (obviously, this includes a Nobel laureate), but the paper may not have been as carefully reviewed and assessed as it could have been. “AEA Papers and Proceedings” does go through some selection and scrutiny but is not peer-reviewed in the same way that papers in journals like the American Economic Review are.
The authors stand strongly behind their work and are eager to promote its impact; e.g., see this NY Times op-ed written by several of the authors.
The calibration model and some other parts of the explanation might be better suited to interactive and linked formats, rather than pdfs, to get the maximum value (but this is not necessary)
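As an illustration of the kind of calculation that could be critically assessed, here is a back-of-envelope check of the appendix claim quoted in footnote [6]. The two headline numbers come straight from that quote; the implied per-child figure is my own derived quantity:

```python
# Headline numbers from the quoted appendix summary (footnote [6])
children_not_immunized = 67e6   # fewer children under age 1 immunized
dalys_lost = 12e6               # DALYs lost in that counterfactual

# Implied burden per child who misses immunization
dalys_per_child = dalys_lost / children_not_immunized
print(round(dalys_per_child, 3))  # ≈ 0.179 DALYs per un-immunized child
```

An evaluator could then ask whether roughly 0.18 DALYs per un-immunized child is plausible given PCV disease-burden estimates.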
2. Aghion, P., Jones, B.F. and Jones, C.I., 2017. Artificial Intelligence and Economic Growth. NBER working paper.
one-click link
Why this paper?
The work seems relevant to longtermism (it is cited as such in David Rhys-Bernard’s syllabus). Longtermist EAs are particularly interested in economic growth but concerned about AI risk, so this may weigh on an important tradeoff.
While the paper mainly uses macroeconomic growth and production theory, as well as simulation and calibration (with only some broad-brush real-world data in a later section) …
it makes specific policy-relevant claims and uses standard tools of economics
the setup, structure and implications and interpretations of these models could be carefully reviewed
The paper’s authors are prominent (even being an NBER member is rather selective), but after about 5-6 years the paper is still not published in a peer-reviewed journal
This need not be because of weaknesses in the research; the authors may have abandoned the process for career/strategic reasons, or it may be seen as ‘not innovative enough’ or ‘not interesting enough’ (whatever that means) by the reviewers at the top economics journals.
… that does not imply that the research is not relevant and interesting for EA; I think it is
Other examples: Animal welfare
Caviola et al; Humans First: Why people value animals less than humans
I’ve been emphasizing work in economics, perhaps because of my background, but work in psychology and other social sciences will also be relevant
Van Loo et al, 2020, Consumer preferences for farm-raised meat, lab-grown meat, and plant-based meat alternatives: Does information or brand matter?; Food Policy
Obvious relevance for animal-welfare interventions and charities
Empirical (national discrete choice experiment/survey). Jason Lusk’s work is often cited and recommended.
Yes, it is published, but ‘Food Policy’ seems like a rather specialized field journal. The paper may not have been given the careful feedback and assessment it deserves, perhaps because mainstream economists saw it as a niche issue.
Other examples: Long-termism and existential risk
Denkenberger and Pearce; Cost-Effectiveness of Interventions for Alternate Food to Address Agricultural Catastrophes Globally.
ALLFED is one of the most concrete interventions and organizations associated with extinction risk and longtermism.
The work is based on Monte Carlo Fermi estimation, I believe. The authors are engineers; it could probably use feedback from an economist, policy analyst, or business academic.
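A Monte Carlo Fermi estimation of this kind can be sketched in a few lines. All of the input distributions below are hypothetical placeholders, not ALLFED’s figures; the point is the method of propagating uncertainty through a product of uncertain factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical lognormal inputs: total program cost and lives saved
# in the catastrophe scenario (placeholders, not the paper's numbers)
cost_total = rng.lognormal(mean=np.log(100e6), sigma=0.5, size=n)   # dollars
lives_saved = rng.lognormal(mean=np.log(1e6), sigma=1.0, size=n)    # lives

# Propagate the uncertainty and summarize cost-effectiveness
cost_per_life = cost_total / lives_saved
lo, med, hi = np.percentile(cost_per_life, [5, 50, 95])
```

An economist reviewing such a model would scrutinize the choice of distributions, correlations between inputs, and sensitivity of the 90% interval to each assumption.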
Grace et al; When will AI exceed human performance? Evidence from AI experts
I don’t want the Unjournal to take on technical AI issues yet, but this is about ‘aggregating uncertainty’ from experts on a crucial issue, and it seems in the quant/econ/social science wheelhouse.
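One simple way to ‘aggregate uncertainty’ across experts is a linear opinion pool: sample uniformly across experts, then from each expert’s fitted distribution. A hedged sketch with made-up forecasts (none of these numbers are from Grace et al.):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expert forecasts of "years until milestone X", each
# summarized as a lognormal fit (illustrative placeholders only)
expert_medians = np.array([20, 35, 50, 80])
expert_sigmas = np.array([0.6, 0.8, 0.5, 1.0])

# Linear opinion pool: pick an expert at random, then sample their view
idx = rng.integers(len(expert_medians), size=100_000)
samples = rng.lognormal(np.log(expert_medians[idx]), expert_sigmas[idx])
pooled_median = np.median(samples)
```

Whether to pool linearly, logarithmically, or with performance weights is exactly the sort of methodological choice an evaluation could interrogate.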
Other examples: Global Health and development
General equilibrium effects of cash transfers: experimental evidence from Kenya—suggested by a forum reader.
Other examples: Improving institutions and public goods provision
Liberal radicalism: a flexible design for philanthropic matching funds—Vitalik Buterin, Zoë Hitzig, and E. Glen Weyl
link
Why: Quadratic voting and schemes like these are often cited/advocated as game-changers, including in EA and rationalist circles, I think. The authors are fairly prominent, but the paper doesn’t seem to have ‘made it through peer review’. Maybe it is seen as too much like advocacy?[7]
Update/correction: The paper was published in Management Science under another title.
So does it make sense, what are counter-arguments and vulnerabilities, has/does it work in practice?
Update: The authors reached out to me. Gitcoin has done some experimentation on this mechanism, and there may be future work to consider and evaluate.
[1] Explained below, with full details and T&C in this Google Doc.
[2] Especially considering empirical, quantitative, and applied work in economics, social science, and impact evaluation.
[3] Mentioning the “Unjournal prize”, send the letter to Rethink Priorities, 530 Divisadero St. PMB #796, San Francisco, California 94117, USA.
[4] Although it is such a big question and ambitious topic, more feedback, assessment, and public debate would be very helpful.
[5] But the interactive site and tool cannot be ‘peer reviewed’ in a standard way, and can’t easily be given career rewards … that’s part of where the Unjournal comes in, hopefully.
[6] “Appendix A provides details behind calculations showing that if PCV coverage in GAVI countries converged to the global rate at the slower rate of the rotavirus vaccine in Figure 2, 67 million fewer children under age 1 would have been immunized, amounting to a loss of over 12 million DALYs.”
[7] I remember being told as a PhD student something like ‘Economists analyze markets and behavior, we don’t propose policies’.