The missing link to AGI
Current AI is impressive, but we will never reach AGI simply by making it bigger and better, because some important things are missing from its foundation. One of the “fathers” of Deep Learning, Yann LeCun, Chief AI Scientist at Meta, recently claimed: “We’re not to the point where our intelligent machines have as much common sense as a cat. So, why don’t we start there?”
My meta-research of the history of science in the relevant fields clearly shows that, in order to reach AGI, we need to start from bacteria and the living cell, on one hand, and from theoretical physics and cosmology, on the other.
Starting with the “fathers” of psychology as a science, Edward Thorndike and Ivan Pavlov, scientists have known for more than a century that a basic mechanism of universal learning is installed in all living creatures, including human beings. For most of that time this mechanism has been neglected by mainstream psychologists and neuroscientists as primitive, slow and inefficient. AI inherited that neglect and magnified it to the extreme.
However, I have identified scientists, theories and experiments that have advanced the understanding of the universal learning mechanism so significantly that, in my opinion, AGI may well emerge within a decade from now.
Most probably, it will evolve from something like Xenobots, tiny synthetic creatures made by Michael Levin and his team from single frog cells, programmed according to mathematical models based on Karl Friston’s fundamental free energy principle. At a later stage, mathematical models of these small but clever minds will be infused into humongous but stupid AI models, making them smart and, even more importantly, alive.
It will be impossible to handle the risks arising from huge, smart, living machines after that shift happens. So we need to mitigate those risks at the stage when the basic simple minds are created. The time to do it is now.
A wealth of information on this subject, including links to the original research, is available in the manuscript of my book Learning Infinity: From One to Zero. It is part of my submission for the prize and is available at this link: https://docs.google.com/document/d/1kxz_siIZVLjRK6DxxrWDsiRD2bjTua-EOPPgAd-3KTI/edit?usp=sharing
#Future Fund worldview prize
Interesting post Yuri, but I am very confused about your claim that Pavlov’s ideas were ignored: “this mechanism has been neglected by the mainstream of psychologists”. My understanding is that the ideas inspired the U.S. school of Behaviorism where Watson and then Skinner pretty much ruled American psychology from 1920 to the mid 50s.
The Cognitive Revolution, spearheaded by (for example) Chomsky, showed that simple rules of learning were not sufficient to explain adult competence. The debate has been revived in a modern form by deep learning, of course.
You are right. Pavlov’s early ideas of stimulus-response learning and conditioning were not ignored. I should have been more specific: it was Pavlov’s later idea of stimulus-stimulus learning that was ignored. I’m working on a short paper that will summarise the book’s findings, and I will try to be clearer in it.
Sorry, but downvoted because of what Noah Scales said. This work could be prize worthy, but as it stands it isn’t good.
Yuri, the prize submission criteria state:
“Past works that would have qualified for this prize include: Yudkowsky 2008, Superintelligence, Cotra 2020, Carlsmith 2021, and Karnofsky’s Most Important Century series. (While the above sources are lengthy, we’d prefer to offer a prize for a brief but persuasive argument.)”
“Only original work published after our prize is announced is eligible to win.”
Your book does not qualify for the prize per se, but you should check with FTX whether a work you submit based on that book’s research might be a suitable submission. I would have let someone else inform you, but a lot of posts to this forum apparently get no attention, particularly as the number of submissions (and comments) goes up.
Noah, I’ve made a very short original submission paper following your advice. Thanks once again. https://forum.effectivealtruism.org/posts/Js4uiJEahHhQBKE3h/agi-will-arrive-by-the-end-of-this-decade-either-as-a
Hmm, ok, well, under the tag “Future Fund Worldview Prize” there are several entries from you. When I wanted to remove some old entries under a certain tag (a different contest tag), I had to move my posts under that tag to drafts; I could not just remove the tag. It might have been a database updating issue, and could have resolved itself, but I did not want to wait and find out. Just an FYI if you don’t want all your old cveres entries to be submitted.
OTOH, Nick Beckstead did say a person can submit multiple entries to address different questions.
Thank you very much for your feedback. I value your advice.