Differential technological development

This piece is a summary and introduction to the concept of differential technological development, written by hashing together existing writings.

Differential technological development

Differential technological development is a science and technology strategy to:

“Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.” (Bostrom, The Vulnerable World Hypothesis, 2019)

We might worry that trying to affect the progress of technology is futile, since if a technology is feasible then it will eventually be developed. Bostrom discusses (though rejects) this kind of argument:

“Suppose that a policymaker proposes to cut funding for a certain research field, out of concern for the risks or long-term consequences of some hypothetical technology that might eventually grow from its soil. She can then expect a howl of opposition from the research community. Scientists and their public advocates often say that it is futile to try to control the evolution of technology by blocking research. If some technology is feasible (the argument goes) it will be developed regardless of any particular policymaker’s scruples about speculative future risks. Indeed, the more powerful the capabilities that a line of development promises to produce, the surer we can be that somebody, somewhere, will be motivated to pursue it. Funding cuts will not stop progress or forestall its concomitant dangers.”[1] (Bostrom, Superintelligence, p. 228, Chapter 14, 2014)

Let’s call the claim underlying such an argument the ‘technological completion conjecture’: the claim that, given continued scientific and technological development efforts, all relevant technologies will eventually be developed:

Technological completion conjecture: “If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” (Bostrom, Superintelligence, p. 228, Chapter 14, 2014)

Nevertheless, the principle of differential technological development is compatible with plausible forms of technological determinism such as the technological completion conjecture. Even granting that conjecture, ‘it could still make sense to attempt to influence the direction of technological research. What matters is not only whether a technology is developed, but also when it is developed, by whom, and in what context. These circumstances of birth of a new technology, which shape its impact, can be affected by turning funding spigots on or off (and by wielding other policy instruments). These reflections suggest a principle that would have us attend to the relative speed with which different technologies are developed.’ (Bostrom, Superintelligence, p. 228, Chapter 14, 2014)

Let’s consider some examples of how we might use the differential technological development framework, where we try to affect ‘the rate of development of various technologies and potentially the sequence in which feasible technologies are developed and implemented’. Recall that our focus is on ‘trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies.’ (Bostrom, Existential Risks, 2002)

“In the case of nanotechnology, the desirable sequence [of technological development] would be that defense systems are deployed before offensive capabilities become available to many independent powers; for once a secret or a technology is shared by many, it becomes extremely hard to prevent further proliferation. In the case of biotechnology, we should seek to promote research into vaccines, anti-bacterial and anti-viral drugs, protective gear, sensors and diagnostics, and to delay as much as possible the development (and proliferation) of biological warfare agents and their vectors. Developments that advance offense and defense equally are neutral from a security perspective, unless done by countries we identify as responsible, in which case they are advantageous to the extent that they increase our technological superiority over our potential enemies. Such “neutral” developments can also be helpful in reducing the threat from natural hazards and they may of course also have benefits that are not directly related to global security.

Some technologies seem to be especially worth promoting because they can help in reducing a broad range of threats. Superintelligence is one of these. [Editor’s comment: By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, for example, advanced domain-general artificial intelligence of the sort that companies such as DeepMind and OpenAI are working towards.] Although it has its own dangers (expounded in preceding sections), these are dangers that we will have to face at some point no matter what. But getting superintelligence early is desirable because it would help diminish other risks. A superintelligence could advise us on policy. Superintelligence would make the progress curve for nanotechnology much steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of adequate defenses. By contrast, getting nanotechnology before superintelligence would do little to diminish the risks of superintelligence.

...

Other technologies that have a wide range of risk-reducing potential include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively, and can make it more feasible to enforce necessary regulation. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.

As mentioned, we can also identify developments outside technology that are beneficial in almost all scenarios. Peace and international cooperation are obviously worthy goals, as is cultivation of traditions that help democracies prosper.” (Bostrom, Existential Risks, 2002)

Differential technological development vs speeding up growth

We might be sceptical of differential technological development, and aim instead to generally increase the speed of technological development. An argument for this might go:

“Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society’s ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be considered highly valuable.” (Paul_Christiano, On Progress and Prosperity, EA Forum)

However, we should expect that ‘economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course.’ Growth can’t continue indefinitely, due to the natural limitations of resources available to us in the universe. ‘So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants—they will live in a world that is “saturated”, where progress has run its course and has only very modest further effects.’ (Paul_Christiano, On Progress and Prosperity, EA Forum)

‘While progress has a modest positive effect on long-term welfare, this effect is radically smaller than the observed medium-term effects, and in particular much smaller than differential progress. Magically replacing the world of 1800 with the world of 1900 would make the calendar years 1800-1900 a lot more fun, but in the long run all of the same things happen (just 100 years sooner).’ (Paul_Christiano, On Progress and Prosperity, EA Forum) With this long-term view in mind, the benefits of speeding up technological development are capped.

Nevertheless, there are arguments that speeding up growth might still have large benefits, both for improving long-term welfare, and perhaps also for reducing existential risks. For debate on the long-term value of economic growth, check out this podcast episode with Tyler Cowen (80,000 Hours, Problem profiles). See the links for more details on these arguments.

Footnotes

[1] “Interestingly, this futility objection is almost never raised when a policymaker proposes to increase funding to some area of research, even though the argument would seem to cut both ways. One rarely hears indignant voices protest: “Please do not increase our funding. Rather, make some cuts. Researchers in other countries will surely pick up the slack; the same work will get done anyway. Don’t squander the public’s treasure on domestic scientific research!”

What accounts for this apparent doublethink? One plausible explanation, of course, is that members of the research community have a self-serving bias which leads us to believe that research is always good and tempts us to embrace almost any argument that supports our demand for more funding. However, it is also possible that the double standard can be justified in terms of national self interest. Suppose that the development of a technology has two effects: giving a small benefit B to its inventors and the country that sponsors them, while imposing an aggregately larger harm H—which could be a risk externality—on everybody. Even somebody who is largely altruistic might then choose to develop the overall harmful technology. They might reason that the harm H will result no matter what they do, since if they refrain somebody else will develop the technology anyway; and given that total welfare cannot be affected, they might as well grab the benefit B for themselves and their nation. (“Unfortunately, there will soon be a device that will destroy the world. Fortunately, we got the grant to build it!”)” (Bostrom, Superintelligence, p. 228, Chapter 14, 2014)
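To make the arithmetic of that reasoning explicit (this framing is an editorial illustration, not part of Bostrom’s text): suppose the would-be developers believe the technology will be built by someone regardless of their own choice. Refraining then yields an outcome they value at -H, since the harm arrives anyway and the benefit goes to someone else, while developing it themselves yields -H + B, the same harm plus the benefit captured for their own nation. Because -H + B exceeds -H whenever B is positive, developing looks better even to a largely altruistic actor, conditional on the futility premise that the total harm is unaffected by their decision.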