It's called online learning in AI 2027 and human-like long-term memory in IABIED.
At a glance, the only mentions of "long-term memory" in If Anyone Builds It, Everyone Dies (IABIED) by Yudkowsky and Soares are in the context of a short science fiction story. It's just stipulated that a fictional AI system has it. It's only briefly mentioned, and there is no explanation of what long-term memory consists of, how it works, or how it was developed. It's similar to Data's "positronic brain" in Star Trek: The Next Generation or any number of hand-waved technological concepts in sci-fi. Do you think this is a good answer to my objection? If so, can you explain why?
Unless there are other passages from the book that I'm missing (maybe because they use a different phrasing), the mentions of "long-term memory" in the book don't seem to have the centrality to the authors' arguments or predictions that you implied. I don't see textual evidence that Yudkowsky and Soares "name continual learning as the major breakthrough required", unless you count those very brief mentions in the sci-fi story.
I think one of the main problems with AI 2027's story is the impossibility of using current AI systems for AI research, as I discussed in the post. There is a chicken-and-egg problem. LLMs are currently useless at actually doing research (as opposed to just helping with research as a search engine, in exactly the same way Google helps with research). AI systems with extremely weak generalization, extremely poor data efficiency, and without continual learning (or online learning) cannot plausibly do research well. Somehow this challenge has to be overcome, and obviously an AI that can't do research can't do the research to give itself the capabilities required to do research. (That would be an AI pulling itself up by its bootstraps. Or, in the terminology of the philosopher Daniel Dennett, it would be a skyhook.) So, it comes down to human researchers to solve this challenge.
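Since "continual learning" and "online learning" are doing a lot of work in this argument, here is a minimal toy sketch (entirely my own illustration, with made-up numbers, not anything from AI 2027 or IABIED) of the distinction between a frozen model, whose weights are fixed after training, and an online learner, which updates after every new observation:

```python
# Toy illustration (hypothetical numbers): a scalar linear model y = w * x.
# "Pretraining" fits the weight to y = 2x; at "deployment" the world drifts
# to y = 3x. The frozen model keeps its pretrained weight, while the online
# model takes one SGD step per new observation, i.e. continual/online learning.

def squared_error(w, x, y):
    return (w * x - y) ** 2

def sgd_step(w, x, y, lr=0.05):
    # d/dw of (w*x - y)^2 is 2 * (w*x - y) * x
    return w - lr * 2 * (w * x - y) * x

w_frozen = w_online = 2.0  # weight after pretraining on y = 2x

for t in range(50):        # stream of post-deployment observations, y = 3x
    x, y = 1.0, 3.0
    w_online = sgd_step(w_online, x, y)

print(f"frozen error: {squared_error(w_frozen, 1.0, 3.0):.4f}")  # stays at 1.0000
print(f"online error: {squared_error(w_online, 1.0, 3.0):.4f}")  # ~0.0000
```

The toy's only point is that the frozen model has no mechanism for incorporating anything it encounters after training, while the online learner does; deployed LLMs, in-context learning aside, are much closer to the frozen case.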
From what I've seen, when AI 2027 talks about an AI discovery/innovation being made by human researchers, the authors just give their subjective best guess of how long that discovery/innovation will take to be made. This is not a new or original objection, of course, but I share the same objection as others. Any of these discoveries/innovations could take a tenth as long or ten times as long as the authors guess. So, AI 2027 isn't a scientific model (which I don't think the authors explicitly claim, although that's the impression some people seem to have gotten). Rather, it's an aggregation of a few people's intuitions. Mainly for that reason, it doesn't really serve as a persuasive piece of argumentation for most people.
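To make the "tenth as long or ten times as long" point concrete, here is a toy back-of-the-envelope simulation (entirely my own construction, with made-up milestone estimates, not the AI 2027 authors' numbers) of how wide the implied range for a total timeline becomes when each milestone's duration is uncertain by a factor of ten in either direction:

```python
import random

random.seed(0)
best_guess_months = [6, 9, 12, 9]  # hypothetical per-milestone best guesses

def sample_total():
    # Each milestone's actual duration is its best guess times a factor
    # drawn log-uniformly from [1/10, 10].
    return sum(m * 10 ** random.uniform(-1, 1) for m in best_guess_months)

samples = sorted(sample_total() for _ in range(100_000))
print(f"10th percentile: {samples[10_000]:>6.0f} months")
print(f"median:          {samples[50_000]:>6.0f} months")
print(f"90th percentile: {samples[90_000]:>6.0f} months")
```

Under these made-up numbers, the spread between the low and high percentiles is enormous relative to the best-guess total, which is the sense in which a chain of subjective guesses doesn't pin down a date.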