To evaluate the editability of AI code, we can compare it to ordinary code and to the human brain along various dimensions: storage size, understandability, copyability, etc. (That is, let's decompose "complexity" into "storage size" and "understandability" for conceptual clarity.)
For size, AI code seems more similar to the human brain. AI models are already quite large, so they may well be around human-sized by the time a hypothetical AI of this kind is created.
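As a rough illustration of that size comparison, here is a back-of-envelope sketch. All of the figures are order-of-magnitude assumptions I'm introducing for illustration, not claims from the discussion itself:

```python
# Back-of-envelope storage comparison (all figures are rough assumptions).

human_synapses = 1e14        # commonly cited order of magnitude for the human brain
bytes_per_synapse = 4        # assume a few bytes of state per synapse
brain_bytes = human_synapses * bytes_per_synapse

model_params = 1e12          # hypothetical large future model
bytes_per_param = 2          # e.g. 16-bit weights
model_bytes = model_params * bytes_per_param

print(f"Brain estimate: ~{brain_bytes / 1e12:.0f} TB")   # ~400 TB
print(f"Model estimate: ~{model_bytes / 1e12:.0f} TB")   # ~2 TB

# Under these assumptions the two are within a couple of orders of magnitude,
# which is the loose sense in which AI code may be "around human-sized".
```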
For understandability, I would expect it to be closer to code than to a human brain. After all, it is created with a known design and an objective that was chosen intentionally. Even if the learned model has a complex architecture, we should be able to understand its comparatively simple training procedure and incentives.
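To make the "simpler training procedure" point concrete, here is a minimal sketch (with illustrative names, not any real system's API): the intentional part, the objective and the update rule, fits in a few lines, even when the resulting learned parameters are far too numerous to inspect directly.

```python
import numpy as np

# Minimal sketch of a training procedure: the intentionally designed part
# (objective and update rule) is a handful of lines, even if the learned
# parameters end up being billions of opaque numbers.

def mse_loss(params, x, y):
    """Known, intentionally chosen objective: mean squared error."""
    pred = x @ params
    return np.mean((pred - y) ** 2)

def grad(params, x, y):
    """Gradient of the objective with respect to the parameters."""
    pred = x @ params
    return 2 * x.T @ (pred - y) / len(y)

def train(x, y, steps=1000, lr=0.01):
    params = np.zeros(x.shape[1])
    for _ in range(steps):
        params -= lr * grad(params, x, y)   # the entire "incentive structure"
    return params                           # the hard-to-read part is what comes out

# Usage: fit a tiny linear model to noisy data.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w + 0.1 * rng.normal(size=100)
print(train(x, y))   # recovers something close to true_w
```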
And AI code will, like ordinary code and unlike the human brain, be copyable and stored digitally, both of which are potentially critical factors for editing.
Size (i.e. storage complexity) doesn’t seem like a very significant factor here.
I'd guess the editability of AI code will resemble the editability of ordinary code more than that of a human brain. But even if you don't agree, I think this points to a better way to analyse the question.
Agree that looking at different dimensions is more fruitful.
I also agree that size isn’t important in itself, but it might correlate with understandability.
I may overall agree that AI code's understandability is closer to that of code than of the human brain. But I think you're moving a bit quickly here: yes, we'll have a known design and an intentional objective at some level. But that level may be quite far removed from "live" cognition. For example, we may know a lot about developmental psychology or the effects of genes and education, but not much about how to modify an adult human brain to produce specific changes. The situation could be similar from an AI system's perspective when it tries to improve itself.
Copyability does seem like a key difference, and one that is unlikely to change as AI systems become more advanced. However, I'm not sure it points to rapid takeoff as opposed to orthogonal properties. (Though it does if we're interested in how quickly the total capacity of all AI systems grows, and we assume a hardware overhang plus sufficiently additive capabilities between systems.) To the extent that it does, the mechanism seems relevantly different from recursive self-improvement: more like a sudden population explosion.
Well, I guess copyability would help with recursive self-improvement as follows: it allows many experiments to be run in parallel, which can be used to test the effects of marginal changes.
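As a concrete illustration of that mechanism, here is a minimal sketch under the assumption that a system can be copied cheaply and each copy evaluated independently: copy the current parameters, apply a different marginal change to each copy, score the copies in parallel, and keep the best one. All names, including the `evaluate` placeholder, are hypothetical.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Hypothetical sketch: exploit cheap copying to test many marginal changes in
# parallel, keeping whichever copy scores best. `evaluate` stands in for some
# expensive benchmark of the system's capability.

def evaluate(params: np.ndarray) -> float:
    """Placeholder capability score (here: closeness to an arbitrary target)."""
    target = np.ones_like(params)
    return -float(np.sum((params - target) ** 2))

def propose_edit(params: np.ndarray, seed: int) -> np.ndarray:
    """Copy the parameters and apply one small, random marginal change."""
    rng = np.random.default_rng(seed)
    copy = params.copy()                    # copyability: cheap, exact duplication
    copy += 0.1 * rng.normal(size=copy.shape)
    return copy

def improve_step(params: np.ndarray, n_copies: int = 8) -> np.ndarray:
    candidates = [propose_edit(params, seed) for seed in range(n_copies)]
    with ProcessPoolExecutor() as pool:     # evaluate all copies in parallel
        scores = list(pool.map(evaluate, candidates))
    best = candidates[int(np.argmax(scores))]
    return best if evaluate(best) > evaluate(params) else params

if __name__ == "__main__":
    params = np.zeros(4)
    for _ in range(20):
        params = improve_step(params)
    print(params)   # drifts toward the target as successful edits accumulate
```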