It doesn’t look like it to me. Here are a few technologies that I’d guess have substantial economic value, where research progress or uptake appears to be drastically slower than it could be, out of concern about safety or ethics:
Huge amounts of medical research, including really important medical research. For example, the FDA banned human trials of strep A vaccines from the 1970s to the 2000s, in spite of 500,000 global deaths from strep A every year. A lot of people also died while COVID vaccines went through all the proper trials.
Nuclear energy
Fracking
Various genetic technologies: genetic modification of foods; gene drives; early recombinant DNA research, whose pioneers famously organized a moratorium and then ongoing research guidelines, including prohibition of certain experiments (see the Asilomar Conference)
Nuclear, biological, and maybe chemical weapons (or maybe these just aren’t useful)
Various human reproductive innovations: cloning of humans, and genetic manipulation of humans (a notable example of an economically valuable technology that is, to my knowledge, barely pursued across different countries, without explicit coordination between those countries, even though pursuing it would make a country more competitive. Someone used CRISPR on babies in China, but was imprisoned for it.)
So do any of these not exist in some form proving the tech is real?
Do any of these not have real world data of the drawbacks?
Take human genetic engineering as an example. We have never tried it at scale, but when we edit other mammals, errors are common, and we know the incredible cost of birth defects. And because we do edit other mammals, we know this isn’t just a possibility down the road; it’s real, since the same tooling that works on rats will work on humans.
Do any of these, if you researched the tech and developed it to its full potential, allow you to invade every country on earth and/or threaten to kill every living person within that country’s borders?
Note that last question. This is not me being edgy. If exactly one nation had a large nuclear arsenal, it could do exactly that. Once other powers started to get nukes in the 1950s, every nation had to get its own, or have trusted friends with nukes and a treaty protecting it.
AGI technology, if developed to its full potential and kept exclusive to one superpower, would allow the same.
This means that in any multilateral agreement to stop AGI research because of dangers that haven’t yet been shown to exist, each party is throwing away all the benefits: becoming incredibly wealthy, becoming immortal, and having almost limitless automated military hardware.
There is so much incentive to cheat that it seems like a non-starter.
For an agreement to be even possible, I think there would need to be evidence that:
(1) It’s too dangerous to even experiment with AGI systems, because one could “break out and kill everyone”. Break out to where? What computers can host it? Where are they? How does the AGI stop humans from cutting the power or bombing the racks of interconnected H100s?
(2) There’s no point in building models, benchmarking them, finding the ones that are safe to use and also AGI, and using them, because they will all betray you.
It’s possible that (1) and (2) are true facts of reality, but there is no actual evidence.
Re human genetic engineering: I don’t think it’s data on errors that is preventing it from happening; it’s moral disgust at eugenics. We could similarly have a taboo against AGI if enough people are scared enough of, and disgusted by, the idea of a digital alien species that doesn’t share our values taking over and destroying all biological life.
Perhaps. I can’t really engage on that, because “moral disgust” doesn’t explain multiple distinct nations, with slightly different views on morality, all refusing to practice it. My main comment is that I think it’s helpful to look at the potential gain versus the potential risks.
Potential gain: yes, you could identify alleles with promoters associated with the nervous system that are statistically correlated with higher IQ. Twenty to thirty years later, if this were done at large scale, people might be marginally smarter.
But how much gain is this? How long has it been since humans even had the tools to attempt genetic engineering? Can you project any gain whatsoever 20–30 years from now?
I would argue the answers are: minimal; less than 10 years since reliable tools existed; and almost no gain, because in 20–30 years, any task that “average or below” IQ individuals struggle with, AI tools will be able to complete in seconds.
Potential risks: each editing error that goes undetected in early fetal development saddles a human being with lifelong birth defects that may require a permanent caretaker or hospitalization. This can cost many millions of dollars, and it is so much liability that essentially only a government could afford to practice genetic engineering.
Governments are slow, and remember, it has only been about 10 years since this was even feasible.
Conclusion: human genetic engineering offers minimal gain, and even in the best case it is slow, which means little annual ROI. By comparison, the decision to, say, industrialize China has generated more wealth than all of humanity possessed prior to that point, and took only slightly longer than one iteration of human genetic engineering.
The potential benefits of AI could double the wealth of humanity in a few years, i.e. 10–100 percent annual ROI.
There are universal human psychological adaptations associated with moral disgust, so it’s not that hard for ‘moral disgust’ to explain broad moral consensus across very different cultures. For example, murder and rape within societies are almost always considered morally disgusting, across cultures, according to the anthropological research.
It’s not that big a stretch to imagine that a global consensus could be developed that leverages these moral disgust instincts to stigmatize reckless AI development. As I argued here.
OK, but some societies have much higher murder rates than others. In some locations, the local police de facto legalize murder between gang members by accepting small bribes and putting minimal effort into investigations.
The issue is runaway differential utility. The few examples of human technologies not being exploited do not have runaway utility. They have small payoffs delayed far into the future and large costs, and even a small mistake makes the payoff negative.
Examples: genetic engineering, human medicine, nuclear power. Small payoffs, and the payoff turns negative on the smallest error.
AI is different. It appears to have an immediate annual payoff of more than 100 percent. OpenAI’s revenue on a model they state cost $68 million to train is about $1 billion USD a month. Assuming a 10 percent profit margin (the rest pays for compute), that’s over 100 percent annual ROI.
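A quick back-of-envelope check of that claim, using only the figures asserted above (the $68 million training cost and $1 billion/month revenue are the comment’s numbers, not independently verified):

```python
# Figures as claimed above (not independently verified).
monthly_revenue = 1_000_000_000   # ~$1B USD per month
profit_margin = 0.10              # assumed 10% profit; the rest pays for compute
training_cost = 68_000_000        # claimed training cost of the model

annual_profit = monthly_revenue * 12 * profit_margin  # $1.2B per year
roi = annual_profit / training_cost                   # profit as multiple of training cost
print(f"Annual ROI on training cost: {roi:.0%}")      # well over 100%
```

Even with a lot of room for error in the margin assumption, the ratio stays far above 100 percent per year, which is the point being made.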
So a society that has less moral disgust towards AI would get richer. It spends its profits on buying more AI hardware and more research. Over time it owns a larger and larger fraction of all assets and revenue on earth. This is how the EMH forces companies towards optimal strategies: over time, the ones that fail to adopt them fail financially. (They fail when their cost of production becomes greater than the market price for a product. Example: Sears, which failed to modernize its logistics chain, so eventually its cost to deliver retail goods exceeded the market price for those goods.)
Moreover, other societies, forced to compete, have to drop some of their moral disgust, and I suspect this scenario ends up like a ratchet, where inevitably some society sheds 100 percent of its disgust in order to compete.
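The ratchet argument is essentially a claim about compound growth. A toy model (the growth rates here are hypothetical, chosen only for illustration) shows how, even starting from equal wealth, the society with the higher reinvestment-driven growth rate comes to hold nearly all of the combined wealth:

```python
def wealth_share(growth_a, growth_b, years, start_a=1.0, start_b=1.0):
    """Fraction of combined wealth held by society A after compounding."""
    a = start_a * (1 + growth_a) ** years
    b = start_b * (1 + growth_b) ** years
    return a / (a + b)

# Hypothetical rates: A reinvests AI profits and grows 30%/yr; B grows a normal 3%/yr.
print(wealth_share(0.30, 0.03, 20))  # A already holds ~99% of all wealth
print(wealth_share(0.30, 0.03, 50))  # effectively 100%
```

The only asymmetry in the model is the growth rate; that difference alone is the mechanism the ratchet argument relies on.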
Pauses, multilateral agreements, etc. can slow this down, but how much time that buys you depends on how fast the gain compounds. Unilateral agreements just free TSMC up to manufacture AI chips for the parties not signing the agreement.
OK, that sounds somewhat plausible, in the abstract.
But what would be your proposal to slow down and reduce extinction risk from AI development? Or do you think that risk is so low that it’s not worth trying to manage it?
My proposal is to engineer powerful and reliable AI immediately, as fast as feasible. If this is the true endgame, where whoever wins the race owns the planet if not the accessible universe, then spending and effort should be proportional. It’s the only way.
You deal with dangerous, out-of-control AIs by tasking your reliable models with destroying them.
The core of the approach is to subdivide and validate all the subtasks. No single model manufactures the drones used to do this by itself; the work is done by thousands of temporary instances. You filter the information that reaches the combat solvers deciding how to task each drone against the enemy, so any begging from the enemy is never processed. You design the killer drones with lots of low-level interlocks to prevent obvious misuse, and their controllers might use conventional software, so they cannot be convinced not to carry out the mission, because they can’t understand language.
The general concept is that if 99 percent of the drones are “safe” like this, then even if escaped models are smart, they just can’t win.
Or in more concrete terms: I am saying that a simple, reliable combat solver is not going to be much worse than a more complex one; that superintelligence saturates. Simple, reliable hypersonic stealth drones are still almost as good as whatever a superintelligence cooks up, and so on. It’s an assumption about available utility relative to compute.