Unless there’s a theory saying that faster computing is the only thing AI now lacks to surpass humans... you mentioned that current algorithms are already enough for superintelligent AI. I wonder if there are any articles discussing this.
It’s a hypothesis without a strong consensus. The position is sometimes phrased as “we are not in a hardware overhang.”
The theoretical basis for the inability to forecast a principled upper bound on capabilities has, I think, mostly to do with all the mystery and confusion baked into ML. Before GPUs, the level of expertise it was forecast AI engineers would need to get impressive results was higher. And of the many candidate directions, it may have felt kind of random that gradient descent (which you can do with AP coursework in high school, not that I was an AP student) took over.
But I would say it’s much more about the ability to knock down confident assertions that things will top out than about confidence in assertions that they won’t.