But of course, I cannot justify high confidence in these views given that many experts disagree. Following the analysis of this post, this is
Dangling sentence.
My personal view is that the “hard AI takeoff” scenarios are driven mostly by the belief that current AI progress flows largely from a single skill, namely “mathematics/programming”. On this view, AI will continue to develop at disparate rates and reach superhuman performance in different areas at different times, but an ASI takeoff will be driven almost entirely by AI performance in software development, and once AI becomes superhuman at this skill it will rapidly become superhuman at all skills. This seems obvious to me, and I think disagreements with it have to rest largely on hidden difficulties in “software development”, such as understanding and modeling many different systems well enough to develop algorithms specialized for them (which seems almost circularly “AGI-complete”).
What do you make of that objection? (I agree with it. I think programming efficiently and flexibly across problem domains is probably AGI-complete.)
My 2 cents: math/programming is only half the battle. Here’s an analogy: you could be the best programmer in the world, but if you don’t understand chess, you can’t program a computer to beat a human at chess, and if you don’t understand quantum physics, you can’t program a computer to simulate matter at the atomic scale (well, not using ab initio methods, anyway).
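To make the chess analogy concrete, here is a minimal illustrative sketch (not from the discussion itself, and assuming the third-party python-chess library, imported as `chess`): the negamax search scaffolding is generic programming, while all of the chess understanding has to be supplied in the evaluation function; with nothing better than naive material counting, the program stays weak no matter how well the rest of the code is written.

```python
# Illustrative sketch only, assuming the third-party python-chess library
# is installed (pip install python-chess). The search code is generic
# programming; the chess knowledge lives entirely in evaluate().
import chess

# Naive material values; a real engine encodes far more chess knowledge here
# (king safety, pawn structure, mobility, ...).
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Score a position from White's point of view by counting material."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def negamax(board: chess.Board, depth: int) -> float:
    """Generic game-tree search; nothing in this function is chess-specific."""
    if depth == 0 or board.is_game_over():
        sign = 1 if board.turn == chess.WHITE else -1
        return sign * evaluate(board)
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Pick the legal move with the highest negamax score for the side to move."""
    best, best_score = None, -float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best

if __name__ == "__main__":
    print("Chosen opening move:", best_move(chess.Board()))
```

The design point of the sketch: improving the search code alone (a pure programming task) yields far less playing strength than improving evaluate(), which is exactly where understanding of the domain, not programming skill, is the bottleneck.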
In order to get an intelligence explosion, a computer would have to not only have great programming skills but also really understand intelligence. And intelligence isn’t just one thing; it’s a bunch of things (creativity, memory, planning, social skills, emotional skills, etc., and these can be subdivided further into different fields like physics, design, social understanding, social manipulation, etc.). I find it hard to believe that the same computer would go from not superhuman to superhuman in almost all of these all at once. Obviously computers already outcompete humans in many of these, but I think even on the more “human” traits, and in areas where computers act more like agents than just tools, it’s still more likely to happen in several waves rather than in a single takeoff.
Does it mean that we could try to control AI by preventing it from knowing anything about programming?
And on the other side, should any AI that is able to write code be regarded as extremely dangerous, no matter how limited its abilities in other domains?