Here’s another example of someone in the LessWrong community thinking that LLMs won’t scale to AGI.
Was there another example before this? Steven Byrnes commented on one of my posts from October and we had an extended back-and-forth, so I’m a little bit familiar with his views.