What are your thoughts on the desirability and feasibility of differential technological development (DTD) as a governance strategy for emerging technologies?
For instance, Toby Ord briefly touches on DTD in The Precipice, writing that “While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones.”
I don’t know much about it beyond that Wikipedia page, but I think that something like this is generally in the right direction.
In particular, I would say:
Technology is not inherently risk-creating or safety-creating. Technology can create safety, when we set safety as a conscious goal.
However, technology is probably risk-creating by default. That is, when our goal is anything other than safety—more power, more speed, more efficiency, more abundance, etc.—then it might create risk as a side effect.
Historically, we have been reactive rather than proactive about technology risk. People die, then we do the root-cause analysis and fix it.
Even when we do anticipate problems, we usually don’t anticipate the right ones. When X-rays were first introduced, people had a moral panic about men seeing through women’s clothing on the street, but no one worried about radiation burns or cancer.
Even when we correctly anticipate problems, we don’t necessarily heed the warnings. At the dawn of the antibiotic age, Alexander Fleming foresaw the problem of resistance, but that didn’t prevent doctors from way overprescribing antibiotics for many years.
We need to get better at all of the above in order to continue to improve safety as we simultaneously pursue other technological goals: more proactive, more accurate at predicting risk, and more disciplined about heeding the warnings. (This is obviously so for x-risk, where the reactive approach doesn’t work!)
I see positive signs of this in how the AI and genetics communities are approaching safety in their fields. I can’t say whether it’s enough, too much, or just right.
Anyway, DTD seems like a much better concept than the conventional “let’s slow down progress across the board, for safety’s sake.” That approach is a fundamental error, for reasons David Deutsch describes in The Beginning of Infinity.
But that’s also where I might (I’m not sure) disagree with DTD, depending on how it’s formulated. The reason to accelerate safety-creating technology is not because “it may be too difficult to prevent the development of a risky technology.” It’s because most risky technologies are also extremely valuable, and we don’t want to prevent them. We want them, we just want to have them safely.