“What we do have the power to affect (to what extent depends on how we define ‘we’) is the rate of development of various technologies and potentially the sequence in which feasible technologies are developed and implemented. Our focus should be on what I want to call differential technological development: trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies.”
On a different post of mine on LW, which quoted this one, Pattern commented:

An idea that seems as good and obvious as utilitarianism.
I agree that the idea seems good and obvious, in a sense. But beware hindsight bias and the difficulty of locating the hypothesis. I.e., something that seems obvious (once you hear it) can be very well worth saying. I think it’s apparent that most of the people making decisions about technological development (funders, universities, scientists, politicians, etc.) are not thinking in terms of the principle of differential technological development.
Sometimes they seem to sort-of approximate the principle, in effect, but on closer inspection the principle would still offer them value (in my view).
E.g., concerns are raised about certain biological research with “dual use” potential, such as gain of function research, and people do call for some of that to be avoided, or done carefully, or the results released less widely. But even then, the conversation seems to focus almost entirely on whether this research is net beneficial, even slightly, rather than simultaneously also asking “Hey, what if we didn’t just try to avoid increasing risks, but also tried to direct more resources to decreasing risks?” Rob Wiblin made a relevant point (and then immediate self-counterpoint, as is his wont) on the latest 80k episode:
If you really can’t tell the sign, if you’re just super unconfident about it, then it doesn’t seem like it’s probably a top priority project. If you’re just unsure whether this is good or bad for the world, I don’t know, why don’t you find something that’s good? That you’re confident is good. I suppose you’d be like, “Well, it’s a 55-45 scenario, but the 55 would be so valuable.” I don’t know.
Having said that, I feel I should also re-emphasise that the principle is not a blanket argument against technological development; it’s more like highlighting and questioning a blanket assumption often implicitly made in the other direction. As Bostrom writes in a later paper:
Technology policy should not unquestioningly assume that all technological progress is beneficial, or that complete scientific openness is always best, or that the world has the capacity to manage any potential downside of a technology after it is invented. [emphasis added]
Pattern’s comment goes on to say:
But what if these things come in cycles? Technology A may be both positive and negative, but technology B which negates its harms is based on A. Slowing down tech development seems good before A arrives, but bad after. (This scenario implicitly requires that the poison has to be invented before the cure.)
I think this is true. I think it could be fit into the differential technological development framework, as we could say that Technology A, which appears “in itself” risk-increasing, is at least less so than we thought, and is perhaps risk-reducing on net, if we also consider how it facilitates the development of Technology B. But that’s not obvious or highlighted in the original formulation of the differential technological development principle.
Justin, also of Convergence, recently wrote a post on something very relevant to this point, which you may be interested in.