Differential technological development
This piece is a summary of and introduction to the concept of differential technological development, stitched together from existing writings.
Differential technological development
Differential technological development is a science and technology strategy to:
“Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.” (Bostrom, The Vulnerable World Hypothesis, 2019)
We might worry that trying to affect the progress of technology is futile: if a technology is feasible, it will eventually be developed. Bostrom discusses (and ultimately rejects) this kind of argument:
“Suppose that a policymaker proposes to cut funding for a certain research field, out of concern for the risks or long-term consequences of some hypothetical technology that might eventually grow from its soil. She can then expect a howl of opposition from the research community. Scientists and their public advocates often say that it is futile to try to control the evolution of technology by blocking research. If some technology is feasible (the argument goes) it will be developed regardless of any particular policymaker’s scruples about speculative future risks. Indeed, the more powerful the capabilities that a line of development promises to produce, the surer we can be that somebody, somewhere, will be motivated to pursue it. Funding cuts will not stop progress or forestall its concomitant dangers.”[1] (Bostrom, Superintelligence, ch. 14, p. 228, 2014)
Let’s call the claim underlying this argument the ‘technological completion conjecture’: with continued scientific and technological development efforts, all relevant technologies will eventually be developed:
Technological completion conjecture: “If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” (Bostrom, Superintelligence, ch. 14, p. 228, 2014)
Nevertheless, the principle of differential technological development is compatible with this kind of technological determinism. Even granting the conjecture, “it could still make sense to attempt to influence the direction of technological research. What matters is not only whether a technology is developed, but also when it is developed, by whom, and in what context. These circumstances of birth of a new technology, which shape its impact, can be affected by turning funding spigots on or off (and by wielding other policy instruments). These reflections suggest a principle that would have us attend to the relative speed with which different technologies are developed.” (Bostrom, Superintelligence, ch. 14, p. 228, 2014)
Let’s consider some examples of how we might use the differential technological development framework, where we try to affect ‘the rate of development of various technologies and potentially the sequence in which feasible technologies are developed and implemented’. Recall that our focus is on ‘trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies.’ (Bostrom, Existential Risks, 2002)
“In the case of nanotechnology, the desirable sequence [of technological development] would be that defense systems are deployed before offensive capabilities become available to many independent powers; for once a secret or a technology is shared by many, it becomes extremely hard to prevent further proliferation. In the case of biotechnology, we should seek to promote research into vaccines, anti-bacterial and anti-viral drugs, protective gear, sensors and diagnostics, and to delay as much as possible the development (and proliferation) of biological warfare agents and their vectors. Developments that advance offense and defense equally are neutral from a security perspective, unless done by countries we identify as responsible, in which case they are advantageous to the extent that they increase our technological superiority over our potential enemies. Such “neutral” developments can also be helpful in reducing the threat from natural hazards and they may of course also have benefits that are not directly related to global security.
Some technologies seem to be especially worth promoting because they can help in reducing a broad range of threats. Superintelligence is one of these. [Editor’s comment: By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, for example, advanced domain-general artificial intelligence of the sort that companies such as DeepMind and OpenAI are working towards.] Although it has its own dangers (expounded in preceding sections), these are dangers that we will have to face at some point no matter what. But getting superintelligence early is desirable because it would help diminish other risks. A superintelligence could advise us on policy. Superintelligence would make the progress curve for nanotechnology much steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of adequate defenses. By contrast, getting nanotechnology before superintelligence would do little to diminish the risks of superintelligence.
...
Other technologies that have a wide range of risk-reducing potential include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively, and can make it more feasible to enforce necessary regulation. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.
As mentioned, we can also identify developments outside technology that are beneficial in almost all scenarios. Peace and international cooperation are obviously worthy goals, as is cultivation of traditions that help democracies prosper.” (Bostrom, Existential Risks, 2002)
Differential technological development vs speeding up growth
We might be sceptical of differential technological development, and aim instead to generally increase the speed of technological development. An argument for this might go:
“Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society’s ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be considered highly valuable.” (Paul Christiano, On Progress and Prosperity, EA Forum)
However, we should expect that ‘economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course.’ Growth can’t continue indefinitely, given the finite resources available to us in the universe. ‘So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants—they will live in a world that is “saturated”, where progress has run its course and has only very modest further effects.’ (Paul Christiano, On Progress and Prosperity, EA Forum)
‘While progress has a modest positive effect on long-term welfare, this effect is radically smaller than the observed medium-term effects, and in particular much smaller than differential progress. Magically replacing the world of 1800 with the world of 1900 would make the calendar years 1800-1900 a lot more fun, but in the long run all of the same things happen (just 100 years sooner).’ (Paul Christiano, On Progress and Prosperity, EA Forum) With this long-term view in mind, the benefits of speeding up technological development are capped.
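To see why, here is a minimal sketch (the symbols are illustrative and not taken from the sources above). Suppose total welfare at time t can be summarized by a single trajectory w(t) that eventually saturates at some level w_max, write T for how long the future lasts, and let Δ be the size of the speed-up (100 years in the example above). The value of shifting the whole trajectory earlier by Δ is then

\[
V_{\text{speed-up}} \;=\; \int_{0}^{T} \bigl[\, w(t+\Delta) - w(t) \,\bigr]\, dt
\;=\; \int_{T}^{T+\Delta} w(t)\, dt \;-\; \int_{0}^{\Delta} w(t)\, dt
\;\le\; \Delta \cdot w_{\max},
\]

which stays bounded no matter how large T is. That is the sense in which the benefits of pure speed-up are capped.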
Nevertheless, there are arguments that speeding up growth might still have large benefits, both for improving long-term welfare and perhaps also for reducing existential risks. For debate on the long-term value of economic growth, check out the 80,000 Hours podcast episode with Tyler Cowen. (80,000 Hours, Problem profiles) See the links for more details on these arguments.
Footnotes
[1] “Interestingly, this futility objection is almost never raised when a policymaker proposes to increase funding to some area of research, even though the argument would seem to cut both ways. One rarely hears indignant voices protest: “Please do not increase our funding. Rather, make some cuts. Researchers in other countries will surely pick up the slack; the same work will get done anyway. Don’t squander the public’s treasure on domestic scientific research!”
What accounts for this apparent doublethink? One plausible explanation, of course, is that members of the research community have a self-serving bias which leads us to believe that research is always good and tempts us to embrace almost any argument that supports our demand for more funding. However, it is also possible that the double standard can be justified in terms of national self-interest. Suppose that the development of a technology has two effects: giving a small benefit B to its inventors and the country that sponsors them, while imposing an aggregately larger harm H—which could be a risk externality—on everybody. Even somebody who is largely altruistic might then choose to develop the overall harmful technology. They might reason that the harm H will result no matter what they do, since if they refrain somebody else will develop the technology anyway; and given that total welfare cannot be affected, they might as well grab the benefit B for themselves and their nation. (“Unfortunately, there will soon be a device that will destroy the world. Fortunately, we got the grant to build it!”)” (Bostrom, Superintelligence, ch. 14, p. 228, 2014)
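In payoff terms, the parenthetical reasoning above can be made explicit with a small illustrative sketch (the notation is mine, not Bostrom’s). If the harm H occurs whether or not a given actor develops the technology, because someone else will develop it anyway, then

\[
U(\text{develop}) \;=\; B - H, \qquad U(\text{refrain}) \;=\; -H, \qquad U(\text{develop}) - U(\text{refrain}) \;=\; B \;>\; 0,
\]

so developing looks worthwhile to each individual actor regardless of how large H is relative to B.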
I wrote this up because I wanted a single resource I could send to people that explained differential technological development.
I made it quite quickly in about 1 hour, so I’m sure it’s quite lacking and would appreciate any comments and suggestions people may have to improve it. You can also comment on a GDoc version of this here: https://docs.google.com/document/d/1HcLcu-WObHO8y45yEMICfmqNpeugbmUX9HdRfeu7foM/edit?usp=sharing
Just want to say that I like it when people (a) try to create nice, quick summaries that can be sent to people or linked to in other things,[1] and (b) take a quite iterative approach to posting on the forum, where the author continues to solicit feedback and make edits even after posting.
On (b), I’ve often appreciated input on my posts from commenters on the EA Forum and LessWrong, and felt that it helped me improve posts in ways that I likely wouldn’t have thought of if I’d just sat on the post for a few more weeks, trying to think of more improvements myself. (Though obviously it’s also possible to get some of this before posting, via sharing Google Docs.)
[1] EA Concepts already partly serves this role, and is great, but there are concepts it doesn’t cover, and those that it does cover it covers very briefly and in a slightly out-of-date way.
Nice, concise summary!
I’ve previously made a collection of all prior works I’ve found that explicitly use the terms differential progress / intellectual progress / technological development. You or readers may find some of those works interesting. I’ve also now added this post to that collection.
I also just realised that that collection was missing Superintelligence, as I’d forgotten that that book discussed the concept of differential technological development. So I’ve now added that. If you or other readers know of other relevant works, please comment about them on that collection :)
Thanks, I also think writing this was a good idea.
This reminded me of arguments that economic growth on Earth must eventually be limited by natural resources, which seem to forget that with increasing knowledge we will be able to do more with fewer resources. E.g., compare how much more value we can get out of a barrel of oil today than 200 years ago.
Indeed. Still, there is an upper limit, since there is surely some bound on how much value we can extract from a given resource, and there are only finitely many atoms in the universe.
But is that upper limit relevant? If we consider all the possible combinations of all the atoms in the universe, that number is so huge that we certainly cannot conclude the effect is small on a long-term view of humanity.
A more relevant argument would be: if we manage to go 100 years faster, the long-term impact (say, as of the year 10000) would be the difference in welfare between living in 1800 and living in 10000, enjoyed by a population spanning 100 years (for mathematicians: the integral of the marginal improvement over the 100 years of advance).
Compared to reducing an existential risk, that seems like a smaller impact, since risk reduction affects all the welfare of all future generations (the integral of x% of all future welfare).
So the further into the future we look (assuming we manage to survive), the more important it is to “not screw up” compared to going faster right now, even without assuming any cap on potential growth.
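A minimal way to formalize this comparison (the symbols are illustrative, as in the sketch earlier in the post): write w(t) for welfare at time t, w_max for the level at which it eventually saturates, T for how long the future lasts, and Δ for the size of the speed-up. Reducing existential risk by a fraction x is then worth roughly

\[
V_{\text{risk reduction}} \;=\; x \int_{0}^{T} w(t)\, dt \;\approx\; x \cdot T \cdot w_{\max} \quad \text{for large } T,
\]

which grows without bound as T grows, while the value of a Δ-year speed-up stays below Δ · w_max. So on a long enough horizon, even a small x beats a large Δ.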