Report on the Desirability of Science Given New Biotech Risks

Should we seek to make our scientific institutions more effective? On the one hand, rising material prosperity has so far been largely attributable to scientific and technological progress. On the other hand, new scientific capabilities also expand our powers to cause harm. Last year I wrote a report on this issue, “The Returns to Science in the Presence of Technological Risks.” The report focuses specifically on the net social impact of science when we take into account the potential abuses of new biotechnology capabilities, in addition to benefits to health and income.

The main idea of the report is to develop an economic modeling framework that lets us tally up the benefits of science and weigh them against future costs. To model costs, I start with the assumption that, at some future point, a “time of perils” commences, wherein new scientific capabilities can be abused and lead to an increase in human mortality (possibly even human extinction). Within this framework, we can ask whether we would prefer an extra year of science, with all the benefits it brings, or an extra year’s delay in the onset of this time of perils. Delay is good in this model because there is some chance we won’t end up having to go through the time of perils at all.
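To make the structure of that comparison concrete, here is a toy sketch in Python. It is not the report’s actual model (which covers health, income, and population dynamics), and every parameter value below is an illustrative placeholder I’ve chosen for exposition:

```python
# Toy sketch of the accelerate-vs-delay trade-off described above.
# Not the report's model; all parameter values are illustrative placeholders.

annual_benefit = 1.0        # utility of one extra year of science (normalized)
peril_annual_cost = 0.4     # expected annual utility loss once the time of perils begins
peril_duration = 30         # assumed length of the time of perils, in years
p_averted_per_year = 0.01   # chance, per year of delay, that the perils never arrive at all

# Accelerating science by one year: we gain the benefit now, but the whole
# time of perils also arrives one year sooner.
value_of_acceleration = annual_benefit

# Delaying by one year: we forgo the benefit, but with some probability the
# perils are avoided entirely, saving their full expected cost.
value_of_delay = p_averted_per_year * peril_annual_cost * peril_duration

print(f"value of accelerating: {value_of_acceleration:.3f}")
print(f"value of delaying:     {value_of_delay:.3f}")
print("accelerate?", value_of_acceleration > value_of_delay)
```

With these made-up numbers acceleration wins, but the point of the sketch is only to show which quantities get traded off against each other.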

I rely on historical trends to estimate the plausible benefits of science. To calibrate the risks, I use various forecasts made in the Existential Risk Persuasion Tournament, which asked a large number of superforecasters and domain experts several questions closely related to the concerns of this report. So you can think of the model as helping assess whether the historical benefits of science outweigh one set of reasonable (in my view) forecasts of risks.

What’s the upshot? From the report’s executive summary:

A variety of forecasts about the potential harms from advanced biotechnology suggest the crux of the issue revolves around civilization-ending catastrophes. Forecasts of other kinds of problems arising from advanced biotechnology are too small to outweigh the historic benefits of science. For example, if the expected increase in annual mortality due to new scientific perils is less than 0.2-0.5% per year (and there is no risk of civilization-ending catastrophes from science), then in this report’s model, the benefits of science will outweigh the costs. I argue the best available forecasts of this parameter, from a large number of superforecasters and domain experts in dialogue with each other during the recent existential risk persuasion tournament, are much smaller than these break-even levels. I show this result is robust to various assumptions about the future course of population growth and the health effects of science, the timing of the new scientific dangers, and the potential for better science to reduce risks (despite accelerating them).

On the other hand, once we consider the more remote but much more serious possibility that faster science could derail advanced civilization, the case for science becomes considerably murkier. In this case, the desirability of accelerating science likely depends on the expected value of the long-run future, as well as on whether we prefer the forecasts of the superforecasters or the domain experts in the existential risk persuasion tournament. These forecasts differ substantially: I estimate domain expert forecasts for annual mortality risk are 20x superforecaster estimates, and domain expert forecasts for annual extinction risk are 140x superforecaster estimates. The domain expert forecasts are high enough, for example, that if we think the future is “worth” more than 400 years of current social welfare, in one version of my model we would not want to accelerate science, because the health and income benefits would be outweighed by the increase in the remote but extremely bad possibility that new technology leads to the end of human civilization. However, if we accept the much lower forecasts of extinction risks from the superforecasters, then we would need to put very, very high values on the long-run future of humanity to be averse to risking it.
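To see the shape of that break-even logic, here is a stylized sketch with hypothetical numbers (not the report’s calibration; the extinction-risk increase below is invented purely for illustration): acceleration only pays if its annual benefits exceed the added extinction probability multiplied by the value placed on the long-run future.

```python
# Stylized break-even logic for the extinction-risk case (hypothetical numbers,
# not the report's calibration): accelerating science pays off only if its
# annual benefits exceed the expected loss from added extinction risk.

annual_benefit = 1.0            # benefits of an extra year of science, in years of current social welfare
added_extinction_risk = 0.0025  # hypothetical increase in annual extinction probability from faster science

# Value the long-run future must exceed (in years of current social welfare)
# before the added risk outweighs the benefit.
break_even_future_value = annual_benefit / added_extinction_risk
print(break_even_future_value)  # 400.0 with these illustrative inputs
```

Higher risk forecasts push that break-even value down, which is why the superforecaster and domain expert estimates lead to such different verdicts.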

Throughout the report I try to cover different sets of assumptions neutrally, but the report’s closing section details my personal views on how we should think about all this, and I thought I would end this post with those views (the following are my views, not necessarily Open Philanthropy’s).

My Take

I end up thinking that better/faster science is very unlikely to be bad on net. As explained in the final section of the report, this conclusion rests mainly on three considerations. First, for a few reasons I think lower estimates of existential risk from new biotechnology are probably closer to the mark than more pessimistic ones. Second, I think it’s plausible that dangerous biotech capabilities will be unlocked at some point in the future regardless of what happens to our scientific institutions (for example, because they have already been discovered, or because advances in AI from outside mainstream scientific institutions will enable them). Third, I think there are reasonable chances that better/faster science will reduce risks from new biotechnology in the long run, by discovering effective countermeasures faster.

In my preferred model, investing in science has a social impact of 220x, as measured in Open Philanthropy’s framework. In other words, investing a dollar in science has the same impact on aggregate utility as giving a dollar each to 220 different people earning $50,000/yr. With science, this benefit is realized by increasing a much larger set of people’s incomes by a very small but persistent amount, potentially for generations to come.
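As a rough illustration of what that multiplier means, here is a sketch that assumes logarithmic utility of income; the logarithmic form is an assumption of this sketch, not necessarily the exact functional form used in Open Philanthropy’s framework:

```python
import math

# Rough illustration of the 220x figure, assuming logarithmic utility of income
# (an assumption of this sketch, not necessarily Open Philanthropy's exact framework).

baseline_income = 50_000
# Utility gain from giving one extra dollar to someone earning $50,000/yr.
utility_of_one_dollar = math.log(baseline_income + 1) - math.log(baseline_income)  # ~= 1 / 50,000

multiplier = 220
# Under the 220x claim, a dollar invested in science produces the same
# aggregate-utility gain as 220 such one-dollar transfers.
utility_per_science_dollar = multiplier * utility_of_one_dollar
print(f"{utility_per_science_dollar:.6f} log-utility units per dollar of science funding")
```

The sketch just restates the equivalence in the paragraph above in arithmetic form; the 220x figure itself comes from the report’s preferred model.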

That said, while I think it is very unlikely that science is bad on net, I do not think it is so unlikely that these concerns can be dismissed. Moreover, even if the link between better/faster science and increased peril is weak and uncertain, the risks from increased peril are large enough to warrant their own independent concern. My preferred policy stance, in light of this, is to pursue, separately and in parallel, reforms that accelerate science and reforms that reduce risks from new technologies, without worrying too much about their interaction (with what are likely to be rare exceptions).

It’s a big report (74 pages in the main text, 119 pages with appendices), and there’s a lot more in it that might be of interest to some readers. For a more detailed synopsis, check out the executive summary, the table of contents, and the summary at the beginning of section 11. For some intuition about the quantitative magnitudes the model arrives at, section 3.0 has a useful parable. You can read the whole thing on arXiv.