Computers today are not creating any new knowledge. They are only using knowledge that people have already created. People still need to feed that knowledge into the machine.
Well, if you compare Stockfish and AlphaZero: AlphaZero learned to play chess by playing against itself, while Stockfish (at least in older versions) was programmed by human experts. AlphaZero reliably beats Stockfish.
You could say AlphaZero has more knowledge of the game of chess than Stockfish, depending on how you define knowledge. It did not gain its knowledge directly from people. It learned it through trial and error guided by an algorithm.
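To make "trial and error guided by an algorithm" concrete, here is a minimal sketch of self-play learning on a toy game. The Game interface (reset, over, legal_moves, after, state, play, winner) is hypothetical, invented for illustration; real systems like AlphaZero pair a loop like this with a neural network and tree search, but the principle is the same: vary play, keep what wins.

```python
import random

def self_play_training(game, episodes=10_000, epsilon=0.1, lr=0.1):
    """Trial-and-error self-play on a hypothetical two-player Game object."""
    value = {}  # state -> estimated value for the player about to move
    for _ in range(episodes):
        game.reset()
        history = []
        while not game.over():
            moves = game.legal_moves()
            if random.random() < epsilon:
                move = random.choice(moves)  # trial: explore a random variation
            else:
                # The state after a move is valued from the opponent's
                # perspective, so pick the move that leaves them worst off.
                move = min(moves, key=lambda m: value.get(game.after(m), 0.0))
            history.append(game.state())
            game.play(move)
        result = game.winner()  # +1 if the last player to move won, 0 for a draw
        for state in reversed(history):  # error: credit or blame past positions
            v = value.get(state, 0.0)
            value[state] = v + lr * (result - v)
            result = -result  # the other player was to move one step earlier
    return value
```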
An AGI needs creativity to solve new problems. Creativity is about creating something new that didn't exist before. People have the potential to solve an infinite number of problems. An AI has a finite set of problems it can solve, and it depends on humans to program that finite set. AI cannot solve new problems that have never existed before. Creativity is an essential step in the knowledge-creation process; it's how we invent theories.
There are plenty of examples of computers producing creative works; the latest round of AI art generators is one.
A person is a mind with an infinite repertoire of problem-solving potential. Once we understand our minds and program an AGI, it will be, by any definition, a person. AGIs will be able to understand things and to solve problems. They will be knowledge creators and explainers like us, and we will treat them like people.
Expert systems can solve some problems better than humans and perform inferences more reliably. Knowledge bases don't perform inferences, but coupled with an explanation module they can explain what they know well enough to teach a person. In combination, an expert system can solve problems and explain knowledge. But it would still just be a dumb program that doesn't understand things. However, people might treat such dumb programs as people; take Eliza, the old therapy program, for example.
Whether my examples actually contradict you depends on definitions for “knowledge” and “understanding”. If you could define those terms explicitly, that might help me understand your article better.
Great questions and thank you for asking. I also had these questions come up in my own mind while learning this epistemology.
Here is how I understand the terms you mentioned:
Knowledge: Information with influence, or information that has causal power (e.g. genes, ideas). Fundamentally, knowledge is our best guesses.
Understanding: Part of a knowledge-transfer process, which varies from subject to subject. It is the rebuilding of knowledge in one's own mind. In people it's an attempt to replicate a piece of knowledge.
Trial and error: Yes, I agree AlphaZero has more knowledge than Stockfish, but it's not new knowledge to the world. Let me try to explain, because this question also puzzled me for a while. A kind of trial and error happens in evolution as well. Genes create knowledge about the environment they live in by replicating with different variations (trial) and dying (error). Couldn't a computer program do the same thing, only faster? I think it can, but only in a simulated environment that people created. The difference is that genes have access to a niche in the physical world, where they confront problems in nature. They solve these problems or they go extinct. A computer program doesn't have the same access to our physical environment, so people must simulate it. But we still don't know enough about our own environment to simulate it accurately; we have huge gaps in our knowledge about the laws of nature.
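As a concrete illustration of that evolutionary loop running inside a human-built simulation, here is a minimal sketch. Everything in it, including the target string and the fitness rule, is an invented placeholder, which is exactly the point: the program only explores the niche that people encode for it.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "knowledge"  # the "niche" is a human-chosen target, not nature

def fitness(genome):
    # The survival criterion is written by people: closeness to the target.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Trial: replicate with variation.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

population = ["".join(random.choices(ALPHABET, k=len(TARGET))) for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"target reached in generation {generation}")
        break
    survivors = population[:20]  # error: the least fit variants "die"
    population = [mutate(random.choice(survivors)) for _ in range(100)]
```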
When a chess program writes its own rules and steps outside of its game, that would hint at AGI.
Creativity in AI art generators: What you are seeing does not involve the creative process. Original art is being displayed, and it can be mistaken for creativity. It's an algorithm, made by people, that combines variations of images based on our inputs. The images are new and have never been seen before, but what is happening is not a creative, problem-solving process.
I agree there will be many cases where our AI will be useful and help people solve their problems, like Eliza, which you mentioned. People are still behind the scenes pulling the strings. And when people create new knowledge (like a deeper understanding of psychology), we will include it in our programs and Eliza will work much better.
I really appreciate your questions. If you have any more, please don't hesitate to ask.
What if the knowledge developed by giving a computer program a model of an environment, and then letting the program run with an algorithm, surprises people with its insight? For example, people study AlphaZero's chess play because it is so novel. It violates what are thought to be the basics of chess tactics and reveals new strategies of play. That knowledge "has influence" of a type.
I'm tempted to interpret you as believing that computers do not produce knowledge about an environment beyond what people already have about the environment model (for example, the rules of chess) that they give a program (a learning algorithm). However, computer programs do produce surprising knowledge of some influence (for example, AlphaZero's superior style of play) that was unknown to the humans who programmed them.
As for developing depth of understanding, work in automated theorem proving (including in geometry) goes back several decades and produced novel proofs of theorems as far back as 1956. A proof of a theorem doesn't qualify as a new theory, but it could show "depth of understanding".
Then there’s developing new theories. Software is having success generating its own hypotheses. Here’s a quote from the linked article on Scientific American:
"Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases."
The article behind the "muse" link is about AI and artistic creativity.
In general, I don’t believe that the AI tools we use now show autonomous thought and consciousness with any continuity. In that way, they do not have our intelligence. However, I am not convinced by our discussion that we humans distinguish ourselves from AI in terms of capabilities for knowledge or understanding, as you have defined those terms.
I think we will learn a lot from AI. It will reveal inefficiencies and show us better ways to do many things. But it's people who will find creative ways to use that information to create even better knowledge. AlphaZero did not create knowledge; rather, it uncovered new efficiencies. People can learn from that, but it takes a human to use what was uncovered to create new knowledge.
AlphaZero (machine learning) vs. problem solving about the nature of reality:
AlphaZero is given the basic rules of the game (people invented these rules).
Then it plays a game with finite moves on a finite board. It finds the most efficient ways to win (this is where Bayesian induction works).
Now graft the game onto our reality, where the board has infinite squares and infinite new sets of problems arise. For instance, new pieces show up regularly and the rules for them are unknown. How would AlphaZero solve these new problems? It can't; it doesn't have the problem-solving capabilities that people have. What AI needs is rational criticism, or creativity with error-correction abilities.
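To make the contrast concrete, here is a minimal sketch of the kind of statistical learning that works inside a fixed, finite game: estimating each move's win rate from trial-and-error samples with a Beta prior (Thompson sampling). The move labels and win rates are invented placeholders, not real chess data. Notice that the move set and the reward rule are fixed in advance; the algorithm cannot step outside them.

```python
import random

# Hypothetical toy example: three fixed "opening moves" with made-up win rates.
moves = {"e4": (1, 1), "d4": (1, 1), "c4": (1, 1)}    # Beta(wins+1, losses+1)
true_win_rate = {"e4": 0.52, "d4": 0.50, "c4": 0.47}  # hidden; simulation only

for _ in range(5000):
    # Sample a plausible win rate for each move from its posterior; play the best.
    pick = max(moves, key=lambda m: random.betavariate(*moves[m]))
    won = random.random() < true_win_rate[pick]        # simulated game outcome
    w, l = moves[pick]
    moves[pick] = (w + 1, l) if won else (w, l + 1)    # Bayesian update

for move, (w, l) in moves.items():
    print(move, f"posterior mean win rate: {w / (w + l):.3f}")
```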
Games in general solved a problem for people (this introduces a new topic, but it's relevant nonetheless):
Imagine if AlphaZero wasn't given the general rules of the game of chess. What would happen next? The program would need to be able to identify a problem before continuing.
People had a problem of being bored. We invented games as a temporary solution to boredom.
Does an AI get bored? No. So how could it invent games (if games hadn't been invented yet)? It couldn't, not without us, because it wouldn't know it had a problem.
The article you linked to:
Yes, we will have many uses for machine learning and AI. It will help people come up with better hypotheses, solve complex (mathematical) problems, and improve our lives. Notice that these are complex problems, like sifting through big data and combining variables, but no creativity is needed. The problems I am referring to are problems about understanding the nature of reality. The article describes a machine going through the same trial-and-error process as the AlphaZero algorithm mentioned earlier. But it's people who created the ranking system for the chemical combinations mentioned in the article, the same way people created the game and rules of chess that AlphaZero plays. People identified the problems and solved them using conjectures and refutations. After the rules are in place, the algorithm can take over.
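A minimal sketch of that division of labor, with an invented placeholder scoring rule standing in for the ranking system that people would supply: humans encode the criterion, and only then can the algorithm take over and search.

```python
import itertools

# Hypothetical ingredient labels; a real system would use actual chemistry data.
INGREDIENTS = ["A", "B", "C", "D", "E"]

def human_defined_score(combo):
    # People decide what counts as "good"; this arbitrary stand-in rule rewards
    # diversity and penalizes ingredient "E".
    return len(set(combo)) - 0.5 * combo.count("E")

# With the rules in place, the algorithm takes over: exhaustive search.
best = max(itertools.combinations_with_replacement(INGREDIENTS, 3),
           key=human_defined_score)
print("top-ranked combination:", best)
```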
Lastly, it's people who interpret the results and come up with explanations to make any of this useful.
AI: finite problem-solving capabilities (Bayesianism works here).
People and AGI: infinite problem-solving capabilities (Popperian epistemology works here).
It’s a huge gap from one to the next.
I don’t expect you to be convinced by my explanation. It took me years of carrying this epistemology around in my head, learning more from Popper and David Deutsch, and the like, to make sense of it. It’s a work in progress.
Thanks for your great questions, this is fun for me. It’s also helping me think of ways to better explain this worldview.
You’re welcome, and thanks for the reply. I’m enjoying our conversation.
What about:
AI art as an example of humanlike creativity
AI generating hypotheses that humans could not, seemingly demonstrating humanlike creativity
AI generating theorems (conjectures, refutations) in old systems back in the '60s
If the concerns are:
creativity in response to real-world events
ability to increase understanding of a novel environment without aid from a predefined ontology, except for testing behaviors learned by mimicry
ability to improve epistemological distinctions
then I think future developments in robotics will satisfy human intuitions about what it takes for an AI to be an AGI. We can see the analogies between robot behavior and human behavior more easily, and robots will be an easier proof of the kind of AGI functionality that your worldview denies.
EDIT: When robots are controlled by, or communicate with, external AI using input from robot sensors or external sensors, we will have a fuller idea of the humanlike varieties of experience and learning that AI can demonstrate.