Well, I also think that the core argument is not really valid. Engagement does not require conceding that the other person is right.
The way I understand it, the core of the argument is that AI fears are based on taking a pseudo-trait like “intelligence” and extrapolating it to a “super” regime. The author claims that this is philosophical nonsense and thus there’s nothing to worry about. I reject that AI fears are based on those pseudo-traits.
AI risk is not in principle about intelligence or agency. A sufficient amount of brute-force search is enough to be catastrophic. An example of this is the “Outcome Pump”. But if you want a less exotic example, consider evolution. Evolution is not sentient, not intelligent, and not an agent (unless your definition of those is very broad). And yet, evolution from time to time makes human civilization stumble by coming up with deadly, contagious viruses.
Now, viruses evolve to make more copies of themselves, so it is quite unlikely that an evolved virus will kill 100% of the population. But if virus evolution didn’t have that life-preserving property, and if it happened 1000 times faster, then we would all die within months.
The analogy with AI is this: suppose we spend 10^100000 FLOPs on a brute-force search for industrial robot designs. We simulate the effects of different designs on the current world and pick the one whose effects are closest to our target. The final designs will be exceedingly good at whatever the target of the search is, including at convincing us that we should actually build the robots. Basically, the moment someone sees those designs, humanity will have lost some control over its future, in the same way that, once SARS-CoV-2 entered a single human body, the future of humanity suddenly became much more dependent on our pandemic response.
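To make the point concrete, here is a toy sketch (all functions and numbers are illustrative, not from any real system): a blind search over a big pool of candidate "designs" that optimizes only a proxy objective. With enough candidates, the winner is an extreme point of the space, extreme on everything correlated with the target, including side effects the objective never mentioned.

```python
import random

# Toy model: each "design" is a vector of knob settings. The search
# optimizes a single target score and is blind to a side-effect score
# that we care about but did not encode in the objective.
random.seed(0)

def target_score(design):
    # Hypothetical proxy objective the search actually optimizes.
    return sum(design)

def side_effect(design):
    # Hypothetical harm we care about but never told the search about.
    return max(design)

designs = [[random.uniform(0, 10) for _ in range(5)] for _ in range(100_000)]
best = max(designs, key=target_score)

# The winning design scores near the ceiling on the target, and
# correspondingly extreme on the unmodeled side effect.
print(target_score(best), side_effect(best))
```

Nothing in this loop is intelligent or agentic; it just visits many states. That is the sense in which raw search, given enough compute, already carries the risk.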
In practice we don’t have that much computational power. That’s where intelligence becomes a necessary component: intelligence vastly reduces the search space. Note that this is not some “pseudo-trait” built on human psychology. This is intelligence in the sense of compression: how many bits of evidence you need to complete a search. It is a well-defined concept with clear properties.
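The compression framing can be sketched in a few lines (the function and numbers here are illustrative): each bit of evidence lets you discard half the remaining candidates, so locating one target among N candidates takes about log2(N) maximally informative observations, versus up to N states for brute force.

```python
import math

# Toy illustration: locating one target design among n candidates.
# Each bit of evidence halves the remaining candidate set, so the
# number of bits needed to finish the search is ceil(log2(n)).
def bits_to_find(n_candidates):
    bits = 0
    while n_candidates > 1:
        n_candidates = math.ceil(n_candidates / 2)
        bits += 1
    return bits

# A search space of a trillion designs needs only ~40 bits of evidence
# if every observation is maximally informative, while brute force
# would visit up to a trillion states.
print(bits_to_find(10**12))  # 40
```

On this view, “more intelligent” just means “needs fewer bits of evidence to finish the same search”, which is why intelligence substitutes for raw compute.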
Current AIs are not very intelligent by this measure. And maybe they will be someday. Maybe it would take a paradigm different from Deep Learning to achieve this level of intelligence. That is an empirical question that we’ll need to settle. But at no point does SIILTBness play any role in this.
Sufficiently powerful search is dangerous even if there’s nothing it is like to be a search process. And ‘powerful’ here is a measure of how many states you visit and how efficiently you do it. Evolution itself is a testament to the power of search: not philosophical nonsense, but the most powerful force on Earth for billions of years.
(Note: the version of AI risk I have explored here is a particularly ‘hard’ version, associated with the people who are most pessimistic about AI, notably MIRI. There are other versions that do rest on something like agency or intelligence.)
The objection that I thought was valid is that current generative AIs might not be that dangerous. But the author himself acknowledges that training situated and embodied AIs could be dangerous, and it seems clear that the economic incentives to build that kind of AI are strong enough that it will happen eventually. (And we are already training AIs in virtual environments such as Minecraft. Is that situated and embodied enough?)