Differential technological development

This piece is a summary and introduction to the concept of differential technological development, written by stitching together existing writings.

Differential technological development

Differential technological development is a science and technology strategy to:

“Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.” (Bostrom, The Vulnerable World Hypothesis, 2019)

We might worry that trying to affect the progress of technology is futile, since if a technology is feasible then it will eventually be developed. Bostrom discusses (though rejects) this kind of argument:

“Suppose that a policymaker proposes to cut funding for a certain research field, out of concern for the risks or long-term consequences of some hypothetical technology that might eventually grow from its soil. She can then expect a howl of opposition from the research community. Scientists and their public advocates often say that it is futile to try to control the evolution of technology by blocking research. If some technology is feasible (the argument goes) it will be developed regardless of any particular policymaker’s scruples about speculative future risks. Indeed, the more powerful the capabilities that a line of development promises to produce, the surer we can be that somebody, somewhere, will be motivated to pursue it. Funding cuts will not stop progress or forestall its concomitant dangers.”[1] (Bostrom, Superintelligence, pp. 228, Chapter 14, 2014)

Let’s call the premise underlying this argument the ‘technological completion conjecture’: the claim that, given continued scientific and technological development efforts, all relevant technologies will eventually be developed:

Technological completion conjecture: “If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” (Bostrom, Superintelligence, pp. 228, Chapter 14, 2014)

Nevertheless, the principle of differential technological development is compatible with this plausible form of technological determinism. Even if the technological completion conjecture holds, ‘it could still make sense to attempt to influence the direction of technological research. What matters is not only whether a technology is developed, but also when it is developed, by whom, and in what context. These circumstances of birth of a new technology, which shape its impact, can be affected by turning funding spigots on or off (and by wielding other policy instruments). These reflections suggest a principle that would have us attend to the relative speed with which different technologies are developed.’ (Bostrom, Superintelligence, pp. 228, Chapter 14, 2014)

Let’s consider some examples of how we might use the differential technological development framework, where we try to affect ‘the rate of development of various technologies and potentially the sequence in which feasible technologies are developed and implemented’. Recall that our focus is on ‘trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies.’ (Bostrom, Existential Risks, 2002)

“In the case of nanotechnology, the desirable sequence [of technological development] would be that defense systems are deployed before offensive capabilities become available to many independent powers; for once a secret or a technology is shared by many, it becomes extremely hard to prevent further proliferation. In the case of biotechnology, we should seek to promote research into vaccines, anti-bacterial and anti-viral drugs, protective gear, sensors and diagnostics, and to delay as much as possible the development (and proliferation) of biological warfare agents and their vectors. Developments that advance offense and defense equally are neutral from a security perspective, unless done by countries we identify as responsible, in which case they are advantageous to the extent that they increase our technological superiority over our potential enemies. Such “neutral” developments can also be helpful in reducing the threat from natural hazards and they may of course also have benefits that are not directly related to global security.

Some technologies seem to be especially worth promoting because they can help in reducing a broad range of threats. Superintelligence is one of these. [Editor’s comment: By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, for example, advanced domain-general artificial intelligence of the sort that companies such as DeepMind and OpenAI are working towards.] Although it has its own dangers (expounded in preceding sections), these are dangers that we will have to face at some point no matter what. But getting superintelligence early is desirable because it would help diminish other risks. A superintelligence could advise us on policy. Superintelligence would make the progress curve for nanotechnology much steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of adequate defenses. By contrast, getting nanotechnology before superintelligence would do little to diminish the risks of superintelligence.

...

Other technologies that have a wide range of risk-reducing potential include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively, and can make it more feasible to enforce necessary regulation. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.

As mentioned, we can also identify developments outside technology that are beneficial in almost all scenarios. Peace and international cooperation are obviously worthy goals, as is cultivation of traditions that help democracies prosper.” (Bostrom, Existential Risks, 2002)

Differential technological development vs speeding up growth

We might be sceptical of differential technological development, and aim instead to generally increase the speed of technological development. An argument for this might go:

“Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society’s ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be considered highly valuable.” (Paul Christiano, On Progress and Prosperity, EA Forum)

However, we should expect that ‘economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course.’ Growth cannot continue indefinitely, given the finite resources available to us in the universe. ‘So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants—they will live in a world that is “saturated”, where progress has run its course and has only very modest further effects.’ (Paul Christiano, On Progress and Prosperity, EA Forum)

‘While progress has a modest positive effect on long-term welfare, this effect is radically smaller than the observed medium-term effects, and in particular much smaller than differential progress. Magically replacing the world of 1800 with the world of 1900 would make the calendar years 1800-1900 a lot more fun, but in the long run all of the same things happen (just 100 years sooner).’ (Paul Christiano, On Progress and Prosperity, EA Forum) With this long-term view in mind, the benefits of speeding up technological development are capped.
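
To make this capping argument concrete, here is a toy model (our illustration; the notation is not from Christiano’s post). Let $w(t)$ be total welfare at time $t$, nondecreasing and saturating at a ceiling $w^*$ from some time $T$ onwards. Uniformly speeding up progress by $\Delta$ replaces $w(t)$ with $w(t+\Delta)$, and the total long-run gain is

\[
\int_0^\infty \bigl[ w(t+\Delta) - w(t) \bigr] \, dt
\;=\; \Delta\, w^* - \int_0^\Delta w(t)\, dt
\;\le\; \Delta \bigl( w^* - w(0) \bigr).
\]

This bound is fixed regardless of how long the saturated era lasts, whereas an intervention that changes which trajectory we end up on, for instance by lowering the probability of existential catastrophe and so raising the expected ceiling $w^*$, scales with the entire future. This is the sense in which the benefits of pure speed-up are capped while those of differential progress are not.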

Nevertheless, there are arguments that speeding up growth might still have large benefits, both for improving long-term welfare and perhaps also for reducing existential risks. For debate on the long-term value of economic growth, see the 80,000 Hours podcast episode with Tyler Cowen (80,000 Hours, Problem profiles), and the links there for more detail on these arguments.

Footnotes

[1] “Interestingly, this futility objection is almost never raised when a policymaker proposes to increase funding to some area of research, even though the argument would seem to cut both ways. One rarely hears indignant voices protest: “Please do not increase our funding. Rather, make some cuts. Researchers in other countries will surely pick up the slack; the same work will get done anyway. Don’t squander the public’s treasure on domestic scientific research!”

What accounts for this apparent doublethink? One plausible explanation, of course, is that members of the research community have a self-serving bias which leads us to believe that research is always good and tempts us to embrace almost any argument that supports our demand for more funding. However, it is also possible that the double standard can be justified in terms of national self interest. Suppose that the development of a technology has two effects: giving a small benefit B to its inventors and the country that sponsors them, while imposing an aggregately larger harm H—which could be a risk externality—on everybody. Even somebody who is largely altruistic might then choose to develop the overall harmful technology. They might reason that the harm H will result no matter what they do, since if they refrain somebody else will develop the technology anyway; and given that total welfare cannot be affected, they might as well grab the benefit B for themselves and their nation. (“Unfortunately, there will soon be a device that will destroy the world. Fortunately, we got the grant to build it!”)” (Bostrom, Superintelligence, pp. 228, Chapter 14, 2014)
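
The decision logic in this footnote can be made explicit with a toy payoff comparison (our sketch; the symbols $B$ and $H$ are Bostrom’s, the formalization is ours). If the actor believes the technology will be developed by others whenever they refrain, the harm $H$ is incurred on every branch, so

\[
U(\text{develop}) = B - H \;>\; -H = U(\text{refrain}) \quad \text{for any } B > 0,
\]

and developing is preferred no matter how large $H$ is. The argument collapses only if refraining genuinely lowers the probability that the technology is ever developed, which is precisely what the technological completion conjecture denies.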