At a glance, the only mentions of “long-term memory” in If Anyone Builds It, Everyone Dies (IABIED) by Yudkowsky and Soares are in the context of a short science fiction story. It’s just stipulated that a fictional AI system has it. It’s only briefly mentioned, and there is no explanation of what long-term memory consists of, how it works, or how it was developed. It’s similar to Data’s “positronic brain” in Star Trek: The Next Generation or any number of hand-waved technological concepts in sci-fi. Do you think this is a good answer to my objection? If so, can you explain why?
Unless there are other passages from the book that I’m missing (maybe because they use a different phrasing), the mentions of “long-term memory” in the book don’t seem to have the centrality to the authors’ arguments or predictions that you implied. I don’t see textual evidence that Yudkowsky and Soares “name continual learning as the major breakthrough required”, unless you count those very brief mentions in the sci-fi story.
I think one of the main problems with AI 2027’s story is the impossibility of using current AI systems for AI research, as I discussed in the post. There is a chicken-and-egg problem. LLMs are currently useless at actually doing research (as opposed to just helping with research as a search engine, in exactly the same way Google helps with research). AI systems with extremely weak generalization, extremely poor data efficiency, and without continual learning (or online learning) cannot plausibly do research well. Somehow this challenge has to be overcome, and obviously an AI that can’t do research can’t do the research to give itself the capabilities required to do research. (That would be an AI pulling itself up by its bootstraps. Or, in the terminology of the philosopher Daniel Dennett, it would be a skyhook.) So, it comes down to human researchers to solve this challenge.
From what I’ve seen, when AI 2027 talks about an AI discovery/innovation being made by human researchers, the authors just give their subjective best guess of how long that discovery/innovation will take. This is not a new or original objection, of course; I share the same objection as others. Any of these discoveries/innovations could take a tenth as long or ten times as long as the authors guess. So, AI 2027 isn’t a scientific model (which I don’t think is what the authors explicitly claim, although that’s the impression some people seem to have gotten). Rather, it’s an aggregation of a few people’s intuitions. Mainly for that reason, it doesn’t really serve as a persuasive piece of argumentation for most people.