But it won’t do anything until you ask it to generate a token. At least, that’s my intuition.
I think this is mostly a fallacy. (I feel like there should be a post explaining this somewhere.)
Here is an alternative version of what you said, to illustrate why I don’t think this is a very interesting claim:
Sure, you can have a very smart quadriplegic who is very knowledgeable. But they won’t do anything until you let them control some actuator.
If your view is that “prediction won’t result in intelligence”, fair enough, though it’s notable that the human brain seems to heavily utilize prediction objectives.
(folding in replies to different sub-comments here)
Sure, you can have a very smart quadriplegic who is very knowledgeable. But they won’t do anything until you let them control some actuator.
I think our misunderstanding here is caused by the word ‘do’. Sure, Stephen Hawking couldn’t control his limbs, but nevertheless his mind was always working. He kept writing books and papers throughout his life, and his brain was ‘always on’. A transformer model is a set of frozen weights that is only ‘on’ when a prompt is entered. That’s what I mean by ‘it won’t do anything’.
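To make the distinction concrete, here is a minimal sketch using the Hugging Face transformers library (the choice of “gpt2” as the checkpoint is just illustrative): loading the model allocates its frozen weights, but no forward pass runs until generation is explicitly invoked on a prompt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loading allocates the frozen weights in memory; this runs no
# computation on any input. The model just sits there, inert.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Only this call triggers forward passes. Between calls the model
# performs no computation and retains no state.
inputs = tokenizer("Hello", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output_ids[0]))
```

In that sense the weights are ‘off’ by default, and only ‘on’ for the duration of each generate() call.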
As far as this project goes, it seems extremely implausible to me that the hard part is the scaffolding work I did.
Hmm, maybe we’re differing on what ‘hard work’ means here! Could be a difference between what’s expensive, time-consuming, etc. I’m not sure this holds for any reasonable scheme, and I definitely think that you deserve a lot of credit for the work you’ve done, much more than GPT4o.
Congrats! I saw that result and am impressed! It’s clearly SOTA on the ARC-AGI-PUB leaderboard, but the original ‘34%->50% in 6 days ARC-AGI breakthrough’ claim is still incorrect.