Thanks, I really appreciate your comment!
And yep, I agree Yudkowsky doesn't seem to be saying this, because it doesn't really represent a phase change driven by positive feedback cycles of intelligence, which is what he expects to happen in a hard takeoff.
I think more of the actual mathematical models he uses when discussing takeoff speeds can be found in his Intelligence Explosion Microeconomics paper. I haven't read it in detail, but my general impression of the paper (and of how it's seen by others in the field) is that it successfully manages to make strong statements about the nature of intelligence and what it implies for takeoff speeds without relying on reference classes, but that it's (a) not particularly accessible, and (b) not very in touch with the modern deep learning paradigm (largely because of an over-reliance on the concept of recursive self-improvement, which now doesn't seem likely to pan out the way it was originally expected to).