Language models have no problem interpreting the image correctly. You can ask them for a description of the input grid and they’ll get it right, they just don’t get the pattern.
I wouldn’t be surprised if that’s correct (though I haven’t seen the tests), but that wasn’t my complaint. A moderately smart/trained human can also probably convert from JSON to a description of the grid, but there’s a substantial difference in experience between seeing a list of grid square-color labels and actually visualizing the grid and identifying the patterns. I would hazard a guess that humans given only a list of square-color labels (rather than just the raw JSON) would perform significantly worse if they are not allowed to then draw out the grids.
And I would guess that even if some people do it well, they are doing it well because they convert from text to visualization.
I might be misunderstanding you here. You can easily get ChatGPT to convert the image to a grid representation/visualization, e.g. in Python, not just a list of square-color labels. It can formally draw out the grid any way you want and work with that, but it still doesn’t make progress.
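For what it’s worth, here’s a minimal sketch of the kind of conversion I mean: the JSON grid rendered as an image with matplotlib. The hex values are my approximation of the ARC palette, and the 3x3 grid is just a made-up example, not an actual task.

```python
# Minimal sketch: render an ARC-style grid from its nested-list-of-integers
# form so it can be inspected visually. Colors are approximate.
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# The ten ARC colors, indexed 0-9 (approximate hex values).
ARC_CMAP = ListedColormap([
    "#000000", "#0074D9", "#FF4136", "#2ECC40", "#FFDC00",
    "#AAAAAA", "#F012BE", "#FF851B", "#7FDBFF", "#870C25",
])

def draw_grid(grid):
    """Plot a grid given as a list of rows of color indices."""
    plt.imshow(grid, cmap=ARC_CMAP, vmin=0, vmax=9)
    plt.xticks([]); plt.yticks([])
    plt.show()

# Hypothetical 3x3 example, not from a real task.
draw_grid([[0, 1, 0],
           [1, 2, 1],
           [0, 1, 0]])
```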
Also, to answer your initial question about ARC’s usefulness, the idea is just that these are simple problems where relevant solution strategies don’t exist on the internet. A non-visual ARC analog might be, as Chollet mentioned, Caesar ciphers with non-standard offsets.
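To make that concrete, here’s a sketch of what such a task might look like; the offset of 7 is just an arbitrary example, and the model would only see a few plaintext/ciphertext pairs, not the cipher code itself.

```python
# Toy Caesar cipher with a non-standard offset. A model would be shown a few
# plaintext/ciphertext pairs and asked to encode or decode a new string.
def caesar(text, offset):
    shifted = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            shifted.append(chr((ord(ch) - base + offset) % 26 + base))
        else:
            shifted.append(ch)
    return "".join(shifted)

print(caesar("attack at dawn", 7))  # "haahjr ha khdu"
```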
Just because an LLM can convert something to a grid representation/visualization does not mean it can itself actually “visualize” the thing. A pure-text model will lack the ability to observe anything visually. Just because a blind human can write out some mathematical function that they can input into a graphing calculator, that does not mean the human can necessarily visualize what shape the function will take, even if the resulting graph is shown to everyone else.
I used GPT-4o, which is multimodal (and in fact was even trained on these particular images, as I took the examples from the ARC website, not the GitHub repo). I did test more grid inputs and it wasn’t perfect at ‘visualizing’ them.
I almost clarified that I know some models technically are multimodal, but my impression is that the visual reasoning abilities of the current models are very limited, so I’m not at all surprised they struggle here. Among other illustrations of this impression, I’ve occasionally found they struggle to properly describe what is happening in an image beyond a relatively general level.
Looking forward to seeing the ARC performance of future multimodal models. I’m also going to try to think of a text-based ARC analog that is perhaps more general. There are only so many unique simple 2D-grid transformation rules, so ARC can be brute-forced to some extent; see the sketch below.
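A crude sketch of what I mean by brute forcing: enumerate a small library of candidate grid transformations and keep the ones consistent with all of a task’s example input/output pairs. Real ARC rules are far richer than this handful of symmetries; the example pair here is made up, and this is only meant to illustrate why a small rule space is partly searchable.

```python
# Enumerate simple candidate transformations and check them against
# example (input, output) pairs. Illustrative only.
import numpy as np

CANDIDATES = {
    "identity":  lambda g: g,
    "rot90":     lambda g: np.rot90(g),
    "rot180":    lambda g: np.rot90(g, 2),
    "flip_lr":   lambda g: np.fliplr(g),
    "flip_ud":   lambda g: np.flipud(g),
    "transpose": lambda g: g.T,
}

def consistent_rules(examples):
    """Return names of transformations matching every (input, output) pair."""
    return [
        name for name, fn in CANDIDATES.items()
        if all(np.array_equal(fn(np.array(i)), np.array(o)) for i, o in examples)
    ]

# Hypothetical example pair: the output is the input mirrored left-right.
examples = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
print(consistent_rules(examples))  # ['flip_lr']
```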