You could argue this [GitHub Copilot] is “AIs improving AI research” in some basic sense, but it definitely doesn’t represent an intelligence explosion.
I like this post! It feels accessible to me.
Commenting on a specific piece I just read (the passage quoted above):
I don’t think Yudkowsky is saying “once AI helps with AI research [beyond zero help] we’ll have an intelligence explosion”
Example non-explosion: if the AI can help you make the research 10% better, intelligence 10 will become 11, then ~11.1, then ~11.11, which won’t explode to infinity.
I think he’s saying the factor (which is 10% in my example) needs to be big enough for the numbers to go to infinity.
Example explosion: if the factor were 100%, then intelligence 10 would become 20, then 40, 80, 160, … and so on until infinity.
At least this is EY’s argument as I understand it.
(Also, it seems correct to me)
I don’t follow your example. What are the functions that take intelligence → research quality and research quality → intelligence?
If I had to guess, I’d say the first example should give intelligence 10 → 11 → 12.1 → 13.31 → … which is still a divergent sequence.
Yep, I’m wrong.
I was thinking about a series like 1 + 1⁄2 + 1⁄4 + 1⁄8 + … ≈ 2, which is something else.
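To make the arithmetic in this exchange concrete, here is a small Python sketch (my own illustration, not from the thread) contrasting the compounding reading of “the AI makes research 10% better”, which diverges as the reply points out, with the shrinking-increments picture the parent comment had in mind, which converges like 1 + 1⁄2 + 1⁄4 + 1⁄8 + … ≈ 2, plus the 100% doubling case:

```python
# Toy growth rules for "intelligence"; purely illustrative numbers.

def compounding(start, factor, steps):
    """Each step multiplies intelligence by (1 + factor): 10 -> 11 -> 12.1 -> 13.31 -> ..."""
    values = [start]
    for _ in range(steps):
        values.append(values[-1] * (1 + factor))
    return values

def shrinking_increments(start, first_increment, ratio, steps):
    """Each improvement is `ratio` times the previous improvement: 10 -> 11 -> 11.1 -> 11.11 -> ...
    This is a geometric series, so it converges (like 1 + 1/2 + 1/4 + ... = 2)."""
    values = [start]
    increment = first_increment
    for _ in range(steps):
        values.append(values[-1] + increment)
        increment *= ratio
    return values

print(compounding(10, 0.10, 5))             # 10, 11, 12.1, 13.31, ...   diverges
print(shrinking_increments(10, 1, 0.1, 5))  # 10, 11, 11.1, 11.11, ...   converges to ~11.11
print(compounding(10, 1.00, 5))             # 10, 20, 40, 80, 160, 320   the "explosion" case
```

Under the compounding rule, any positive factor eventually grows without bound; the sequence only converges if each round of help is a fixed fraction of the previous round rather than of the current intelligence level.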
Thanks, I really appreciate your comment!
And yep, I agree Yudkowsky doesn’t seem to be saying this, because it doesn’t really represent a phase change of positive feedback cycles of intelligence, which is what he expects to happen in a hard takeoff.
I think more of the actual mathematical models he uses when discussing takeoff speeds can be found in his Intelligence Explosion Microeconomics paper. I haven’t read it in detail, but my general impression of the paper (and of how it’s seen by others in the field) is that it manages to make strong statements about the nature of intelligence and what it implies for takeoff speeds without relying on reference classes, but that it’s (a) not particularly accessible, and (b) not very in touch with the modern deep learning paradigm (largely because of an over-reliance on the concept of recursive self-improvement, which now doesn’t seem likely to pan out the way it was originally expected to).