You seem to be missing the possibility of superhuman learning coming from superhuman sample efficiency, in the sense of requiring less feedback to acquire skills, including actively experimenting in useful directions more effectively.
Nope, we didn’t miss the possibility of AGIs being very sample efficient in their learning. We just don’t think it’s certain, which is why we forecast a number below 100%. It sounds like your estimate is higher than ours; that doesn’t mean we missed the possibility, though.