I’m not sure how widespread that view is, but I think that it is likely mistaken. (And I guess the view is most likely linked to thinking about language models without scaffolding built on top of them?)
Moreover, from a safety perspective, I think it's pretty important that agents built via scaffolding on top of language models are naturally quite transparent, which may make this one of the most desirable regimes in which to obtain general intelligence.
(I wrote more about this here.)