>Today’s cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…
This comment does seem to point to a possible disagreement with the AGI concept. I interpreted some of the other comments a little differently though. For example,
>Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game set match and you stop shop. The reality that at least I see is that in the world today, for all the dominance of any one model, that is not the case. Take coding, there are multiple models. In fact, everyday it’s less the case. There is not one model that is getting deployed broadly. There are multiple models that are getting deployed.
I would say humans are general intelligences, but obviously different humans are good at different things. If we had the ability to cheaply copy people like software, I don't think we would pick one really smart person and deploy only them across the whole economy. I take Dwarkesh's point to be that continual learning will play out differently for AIs, because everything the AIs learn can be amalgamated into a single model. But I don't think it's obvious that this will turn out to be the most efficient way to do things.