A mandatory 200-character field summarizing the blogpost.
A mandatory keywords box.
Better Google Docs integration.
Thank you for the detailed write-ups.
I will focus on where I disagree with the Chris Chambers / Registered Reports grant (note: this grant is to Let’s Fund’s grantee; I co-founded Let’s Fund).
“Chambers has the explicit goal of making all clinical trials require the use of registered reports. That outcome seems potentially quite harmful, and possibly worse than the current state of clinical science.”
I think, if all clinical trials became Registered Reports, then there’d be net benefits.
In essence, if you agree that all clinical trials should be preregistered, then Registered Reports are merely preregistration taken to its logical conclusion by being more stringent (i.e. peer-reviewed, less vague, etc.).
Relevant quote from the Let’s Fund report (Lets-Fund.org/Better-Science):
“The principal differences between pre-registration and Registered Reports are:
In pre-registration, trial outcomes or dependent variables and the way of analyzing them are not described as precisely as could be done in a paper
Pre-registration is not peer-reviewed
Pre-registration also often does not describe the theory that is being tested.
For this reason, simple pre-registration might not be as good as Registered Reports. For instance, in cancer trials, the descriptions of what will be measured are often of low quality, i.e. vague, leading to ‘outcome switching’ (switching between planned and published outcomes). Moreover, data processing can often involve very many seemingly reasonable options for excluding or transforming data, which can then be used for data dredging even in pre-registered trials (“With 20 binary choices, 2^20 = 1,048,576 different ways exist to analyze the same data.”). Theoretically, preregistration could be more exhaustive and precise, but in practice, it rarely is, because it is not peer-reviewed.”
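As a toy illustration of the quoted combinatorics: even a modest number of binary analysis choices yields an enormous number of possible analyses, and only a handful of analysis paths is enough to make a spurious p < 0.05 likely. The sketch below treats analysis paths as independent, which is my simplifying assumption (real analysis choices are correlated):

```python
# Toy illustration of the combinatorics quoted above. Assumption for
# illustration only: analysis paths are treated as independent, which
# real, correlated analysis choices are not.
n_choices = 20
n_analyses = 2 ** n_choices          # 1,048,576 ways to analyze the same data
print(n_analyses)

# Chance that at least one of k independent analyses crosses p < 0.05
# by luck alone:
def family_wise_false_positive_rate(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

print(family_wise_false_positive_rate(1))    # nominal 5% for a single analysis
print(family_wise_false_positive_rate(14))   # ≈ 0.51, about a coin flip
```

With 14 analysis paths out of the million-plus available, the family-wise false positive rate already exceeds 50%, which is why vague preregistrations leave so much room for data dredging.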
Also, note that exploratory analysis can still be used in Registered Reports, if it’s clearly labelled as exploratory.
“Ultimately, from a value of information perspective, it is totally possible for a study to only be interesting if it finds a positive result, and to be uninteresting when analyzed pre-publication from the perspective of the editor.“
Generally, a scientist’s prior on the likelihood of a treatment being successful should be roughly proportional to the value of information. In other words, if the likelihood that a treatment is successful is trivially low, then the trial is likely too expensive to be worth running, or running it will mostly increase the false positive rate.
On bandwidth constraints: this now seems largely a historical artifact from pre-internet days, when journals had limited space and no good search functionality. Back then, it was good to have a journal like Nature that was very selective and focused on positive results. These days, Nature could publish as many high-quality null-result papers online as it wants without sacrificing anything, because people don’t read a dead-tree copy of Nature front to back. Scientists now solve the bandwidth constraint differently (e.g. internet keyword searches, how often a paper is cited, and whether their colleagues share it on social media).
In your example, you can combine all 100 potential treatments into one paper and then just report whether each worked or not. The cost of reporting that a study was carried out is trivial compared to the cost of running it. If the scientists don’t believe any results are worth reporting, they can just not report them, and we will still have a record of what was attempted (similar to how it is good that we can see unpublished preregistrations on ClinicalTrials.gov that never went anywhere, as data on the size of publication bias).
“Because of dynamics like this, I think it is very unlikely that any major journals will ever switch towards only publishing registered report-based studies, even within clinical trials, since no journal would want to pass up on the opportunity to publish a study that has the opportunity to revolutionize the field.”
This is traded off against top journals publishing biased results. This follows directly from auction theory, where the highest bidder is most likely to pay more than the true price; similarly, people who publish in Nature will be more likely to overstate their results. This is borne out empirically; see https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0050201
Registered Reports are simply more trustworthy, and this might change the dynamics so that there will be pressure on journals to adopt the Registered Reports format or fall behind in terms of impact factor.
“As a result, large parts of the paper basically have no selection applied to them for conceptual clarity,”
On clarity: Registered Reports will have more clarity because they’re more theoretically motivated (see https://lets-fund.org/better-science/#h.n85wl9bxcln4) and because the reviewers, instead of being impressed by results, judge papers more on how clearly and in how much detail the methodology is described. This might aid replication attempts and will likely also be a good proxy for the clarity of the conclusion. Scientists are still incentivized to write good conclusions, because they want their work to be cited. Also, the importance of the conclusion will be deemphasized. In the optimal case of a Registered Report, what happens during review is “a comprehensive and analytically sophisticated design, vetted down to each single line of code by the reviewers before data collection began” (https://www.nature.com/articles/s41562-019-0652-0).
Pretty much all that is missing from the results section is the final numbers, which are plugged in after review and data collection; the results section then “writes itself”. The conclusion section is perhaps almost unnecessary if the introduction already motivates the implications of the research results, as the introduction already serves as a more extensive speculative summary in many papers.
I think the conclusion section will be a quite short and not very important section in Registered Reports, as is increasingly the case anyway (in Nature, there’s sometimes no “redundant” conclusion section).
“Excessive red tape in clinical research seems like one of the main problems with medical science today”
I don’t think excessive red tape is one of the main problems with medical science (say, on the same level as publication bias), that there are no benefits to IRBs, or that Registered Reports add red tape or have much to do with the issue you cite. I think a much bigger problem is research waste, as outlined in the Let’s Fund report.
Most scientists who publish Registered Reports describe the publication experience as quite pleasant with a bit of front-loaded work (see e.g. https://twitter.com/Prolific/status/1153286158983581696). In my view, the benefits far outweigh the costs.
On differential tech development, and perhaps as an aside: note that more reliable science has wide-ranging consequences for many other cause areas in EA. Not only has global development had problems with replicability (e.g. https://blogs.worldbank.org/impactevaluations/pre-results-review-journal-development-economics-lessons-learned-so-far and the “worm wars”), but so have areas related to GCBRs (e.g. there’s a new Registered Reports initiative for research on influenza; see https://cos.io/our-services/research/flu-lab/).
I’d be interested in seeing view/hit counters on every post and general data on traffic.
Also quadratic voting for upvotes.
Thank you for your kind words Sam.
Of course, philosophically, pure time discounting is wrong, but:
“Another reason to discount is that far future benefits are more speculative, and changes to the world in the meantime can disrupt your project or make it irrelevant. For example, a vaccine development project that hopes to deliver a vaccine in a few decades faces a higher risk of being defunded or the disease in question disappearing, than does a similar project that expects to deliver a vaccine in a matter of years. This is a good reason to discount future benefits and costs, but the appropriate rate will vary dramatically depending on what you are looking at, and will not necessarily be the same every year into the future.”
The social cost of carbon is generally highly sensitive to the pure rate of time preference.
“The national social costs of carbon of faster growing economies are less sensitive to the pure rate of time preference and more sensitive to the rate of risk aversion” from Tol
and from the Ricke paper:
“CSCCs were calculated using both exogenous and endogenous discounting. For conventional exogenous discounting, two discount rates were used, 3% and 5%. The results under endogenous discounting were calculated using two rates of pure time preference (ρ = 1%, 2%) and two values of the elasticity of marginal utility of consumption (μ = 0.7, 1.5), for four endogenous discounting parameterizations.”
So maybe it’s not highly sensitive to just discounting anymore.
But both the Ricke and the Tol papers run sensitivity analyses on their SCC, and different parameterizations put the SCC anywhere from tens to thousands of dollars per tonne, and I guess they use “sensible” ranges for this.
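The “endogenous” discounting the quote refers to follows the standard Ramsey rule, r = ρ + μ·g. A minimal sketch, using the ρ and μ values quoted above; the 2% consumption growth rate is my illustrative assumption, not a figure from the paper:

```python
# Sketch of the Ramsey rule behind "endogenous" discounting: r = rho + mu * g,
# with rho the pure rate of time preference, mu the elasticity of marginal
# utility of consumption, and g per-capita consumption growth. The rho and mu
# values mirror the parameterizations quoted from the Ricke paper; the 2%
# growth rate is an illustrative assumption.
def ramsey_discount_rate(rho, mu, g):
    return rho + mu * g

def discount_factor(rho, mu, g, years):
    """Present value of $1 of consumption received `years` from now."""
    return 1.0 / (1.0 + ramsey_discount_rate(rho, mu, g)) ** years

for rho in (0.01, 0.02):
    for mu in (0.7, 1.5):
        pv = discount_factor(rho, mu, g=0.02, years=50)
        print(f"rho={rho:.0%}, mu={mu}: $1 in 50 years is worth ${pv:.2f} today")
```

Even within these “sensible” parameter ranges, the present value of damages 50 years out varies severalfold, which is one mechanism behind the tens-to-thousands-of-dollars spread in SCC estimates.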
I would love others to look into this more as well and could well imagine new research uncovering facts that would dominate this analysis.
I think it’s a mixture of the following:
1. African countries are relatively small
2. The social cost of carbon measures the cost to GDP, and if your GDP is not very big to start with, then there’s not a big cost to you.
It is quite unintuitive and disconcerting that the DRC (one of the poorest countries, with 80 million people, close to the equator, and particularly affected by climate change) has an official social cost of carbon of only 30 cents per tonne, whereas the US has one of $40; see:
As I said above, there are contributors to the (true) social cost of carbon that are not fully captured by empirical, macroeconomic damage functions; for their likely impacts on the social cost of carbon, see Table S5 in the paper’s supplementary material and Table 1 in. For instance:
Adjustment costs (short-term costs of adaptation)
Non-market damages (biodiversity loss, cultural losses, etc.)
Tipping points in the climate system (catastrophic climate events, hysteresis etc.)
High inertia effects of CO2 (ocean acidification, sea level rise)
General equilibrium effects (spillover, trade, etc.)
Macro-scale adaptation (long-term restructuring of economy)
Political instability and violent conflicts
Large migration flows
More extreme weather and natural disasters
Bresler finds that explicitly accounting for climate mortality costs triples the welfare costs of climate change.
The highest social cost of carbon estimate in the literature is on the same order of magnitude ($1,687), and the highest figure among many in a recently published paper finds that for 6 degrees of warming (which has a substantial probability) the cost is $21,889 per tonne.
That’s why the pessimistic version of my model increases the SCC by 10x (higher than most estimates).
We got this one, which was quite cheap:
The changes have been very substantial, because the first version was a much simpler model.
The main difference is that the first version had an error, as pointed out by AGB in the comments.
Here’s a comparison doc between the two versions:
Thank you- I’ve now included this in my model:
“Some global development interventions have been estimated to be 17.5x more effective than cash-transfers (e.g. deworming). We use this as the optimistic case.”
Also, I agree that climate modelling is very uncertain, but we should not throw out the baby with the bathwater.
Quote from my analysis above:
“one study estimated a lower bound of the global social cost of carbon at US$125 and argues that:
“Quantifying the true SCC value is complicated because of various difficult-to-quantify damage cost categories and the interaction of discounting, uncertainty, large damages and risk aversion [...] The best that can be offered is a lower bound based on a conservative meta-estimate that aggregates studies using high and low discount rates; it does not account for various climate change damages owing to a lack of reliable information, and it does not consider a minimax regret argument addressing damages associated with extreme climate change.”
Also, as an aside, outside of prioritization, for optimal policy (e.g. carbon pricing) the social cost of carbon should be:
Set to the marginal abatement cost, which can be optimal and easier to estimate; or
Set to err on the side of overestimating externalities (while reducing other non-Pigovian taxes).”
Also, I now include more optimistic estimates (in the sense that the SCC won’t be that high) in my sensitivity analysis.
Thank you for this comment. Relevant quote from my updated analysis above:
“The new paper’s social cost of carbon figure is controversial and has been criticized for being too high for various methodological reasons. For instance, one very critical new paper also now estimates the social cost of carbon on a country-level, suggesting that the global social cost of carbon is only $24 (and, using various sensitivity analyses, values ranging from $3.38/tCO2e to $21,889/tCO2e).
To account for the new paper overestimating or underestimating the social cost of carbon, below, we use sensitivity analysis to show how our model responds to over- or underestimating the true social cost of carbon by 10x.”
Thanks for catching this mistake.
I’ve updated the analysis to reflect this.
I emailed the authors but they didn’t reply.
But I think the social cost of carbon figures should generally be interpreted as current US dollars. They are then discounted for decreasing returns to consumption for future people who live in countries with higher consumption.
So we should divide the $417 figure by the 100x multiplier (or more, see my sensitivity analysis).
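The adjustment described above is simple arithmetic; a minimal sketch (the 100x marginal-utility multiplier is the baseline from my model, with larger values explored in the sensitivity analysis):

```python
# Converting the SCC from current US consumption dollars into
# cash-transfer-equivalent terms by dividing by the marginal-utility
# multiplier. 100x is the baseline from the model above; the sensitivity
# analysis considers larger multipliers, which shrink the figure further.
scc_us_consumption = 417.0    # $/tCO2, in rich-country consumption terms
multiplier = 100.0            # marginal utility of consumption multiplier
scc_equivalent = scc_us_consumption / multiplier
print(scc_equivalent)
```

So the $417 headline figure corresponds to roughly $4 per tonne in dollars-to-the-global-poor terms under the baseline multiplier.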
The OECD is the main source for data on Official Development Assistance; see:
The World Bank also has some interesting data:
I’m the co-founder of Lets-Fund.org.
We do independent, in-depth research to help foundations and individuals to donate to the most effective policies to solve today’s most important global challenges (e.g. the replication crisis, climate change).
One of our two campaigns is on improving all of hypothesis-driven science by implementing a new publication format called Registered Reports.
You can find our in-depth write-up on this here:
We’ve already crowdfunded $75,000 for this campaign (this includes a grant recommendation that the EA Long-term Future fund is currently very strongly considering), but I believe the grantee can productively absorb more money. This would fund a teaching buyout for the grantee, Professor Chris Chambers (who happens to be a fellow Aussie!).
Chris Chambers has already hired assistants to push his Registered Reports advocacy forward and I’m exceedingly excited about his work. There was an editorial in Nature about it a few weeks ago, but in brief, he has implemented the new Registered Reports publication format at more than 200 journals now (including PLoS Biology, a top biology journal). More promisingly, he is also lobbying PNAS, generally considered to be the best journal after Nature and Science, to implement the format through an open letter signed by 250 other scientists.
If he is successful, and scientists realize that they can get a publication in PNAS just by submitting a paper with exceptional methodology, independent of whether the results are positive, then it might cascade into changing science in a fundamental way in the near future. I think Chris is very driven and can productively use additional funds.
There has also been some recent interest in Registered Reports within global development (see https://blogs.worldbank.org/impactevaluations/pre-results-review-journal-development-economics-lessons-learned-so-far), so I feel this could prevent another Worm Wars situation (see https://blog.givewell.org/2017/12/07/questioning-evidence-hookworm-eradication-american-south/), and so this grant opportunity would also suit someone interested in improving global development.
Let me know if you have any questions at (happy to jump on a call):
Just came across this. The deadline has already passed, but perhaps this might still be useful:
In support of the administration’s artificial intelligence R&D strategy, the White House Office of Management and Budget is accepting public comments on ways to improve the accessibility and quality of relevant federal datasets and models. OMB notes that domain areas of particular interest include weather forecasting, manufacturing, agriculture, and national security, among others. Comments are due Aug. 9.
Generally, I find this research agenda really interesting. I have only skimmed this post, but I also like your analysis and the way you go about it.
“As it turns out, LSD, psilocybin, and DMT all get rid of Cluster Headaches in a majority of sufferers. Given the safety profile of these agents, it is insane to think that there are millions of people suffering needlessly from this condition who could be nearly-instantly cured with something as simple as growing and eating some magic mushrooms.”
I think this is hyperbole. I reviewed the literature a while ago, and while I agree there is some suggestive evidence that this is true, I do not think it is strong enough to warrant the claims you make, and there are many qualifications. Also, I think you should cite the relevant studies on this subject (https://scholar.google.com/scholar?as_ylo=2015&q=LSD+cluster+headaches&hl=en&as_sdt=0,5).
Also see Linch’s excellent summary of the philosophy paper “The possibility of ongoing moral catastrophe”:
This post got me wondering whether there should be a Glassdoor for EA.
Yes, this is an intuition I had as well.
Carbon tariffs (or border carbon adjustments) might prevent some, but not all, carbon leakage and reduce emissions. But they are quite difficult to implement (one must estimate the carbon intensity of every imported good) and might lower trade flows and welfare, especially in emerging economies.
Generally, I thought there was surprisingly little research on carbon tariffs, even though, as your intuition shows, they should go hand in hand with carbon taxes.
Crucially, even if we had perfect carbon taxes and tariffs, UK emissions make up only 3% (and shrinking) of the global total.