In addition to not fully understanding the view, I don’t fully understand the discussion in this section or why it’s justifying this probability. It seems like if you had human-level learning (as we are conditioning on from sections 1+3) then things would probably work in <2 years unless parallelization is surprisingly inefficient.
Maybe, but maybe not, which is why we forecast a number below 100%.
For example, it is exceedingly rare to see a CEO hired with <2 years of experience, even if they are very intelligent, have read a lot of books, and have watched a lot of interviews. Some of the reasons may be irrational or irrelevant, but surely some are real. A CEO job requires a large constellation of skills practiced and refined over many years; e.g., relationship building with customers, suppliers, shareholders, and employees.
For an AGI to be installed as CEO of a corporation in under two years, human-level learning would not be enough—it would need to be superhuman in its ability to learn. Such superhuman learning could come from simulation (e.g., modeling and simulating how a potential human partner would react to various communication styles), from parallelization (e.g., being installed as a manager in 1,000 companies and then compiling and sharing learnings across copies), or from something else.
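To make the parallelization idea concrete, here is a minimal, purely illustrative Python sketch (every name, number, and the toy update rule are hypothetical assumptions for illustration, not anything from this report): many copies of the same agent each gather real-world feedback in their own environment, and their experience is pooled into a single shared update, so each copy effectively learns from many times more feedback per unit of wall-clock time.

```python
# Hypothetical, illustrative sketch of "learning via parallelization":
# N copies collect feedback separately, then all feedback is pooled into
# one shared-skill update. Real-world feedback still arrives at ordinary
# speed per copy, but the shared skill sees N-fold experience per cycle.

import random

NUM_COPIES = 1000        # e.g., installed as a manager in 1,000 companies
STEPS_PER_REVIEW = 10    # feedback events each copy sees before syncing

def collect_experience(shared_skill, steps):
    """One copy gathers (situation, action, outcome) feedback locally."""
    experience = []
    for _ in range(steps):
        situation = random.random()
        action = shared_skill + random.gauss(0, 0.1)   # act from the shared skill
        outcome = -abs(action - situation)             # stand-in feedback signal
        experience.append((situation, action, outcome))
    return experience

def merge_and_update(shared_skill, all_experience, lr=0.01):
    """Compile learnings across copies into a single shared-skill update."""
    # Toy rule: nudge the shared skill toward the best-scoring situation.
    best = max(all_experience, key=lambda e: e[2])
    return shared_skill + lr * (best[0] - shared_skill)

shared_skill = 0.0
for review in range(5):
    pooled = []
    for _ in range(NUM_COPIES):   # in reality these copies run in parallel
        pooled.extend(collect_experience(shared_skill, STEPS_PER_REVIEW))
    shared_skill = merge_and_update(shared_skill, pooled)
    print(f"review {review}: pooled {len(pooled)} feedback events, "
          f"shared skill = {shared_skill:.3f}")
```

The only point of the toy is the bookkeeping: each copy still receives expensive real-world feedback at human speed, but the shared skill is updated from the pooled total, which is the sense in which parallelization could substitute for superhuman per-sample learning.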
I agree that skills learned from reading or thinking or simulating could happen very fast. Skills requiring real-world feedback that is expensive, rare, or long-delayed would progress more slowly.
You seem to be missing the possibility that superhuman learning comes from superhuman sample efficiency, in the sense of requiring less feedback to acquire skills, including actively experimenting in useful directions more effectively.
Nope, we didn’t miss the possibility of AGIs being very sample efficient in their learning. We just don’t think it’s certain, which is why we forecast a number below 100%. Sounds like your estimate is higher than ours; however, that doesn’t mean we missed the possibility.