What if the knowledge developed by giving a computer program a model of an environment, and then letting the program run along with an algorithm, surprises people with its insight? For example, people study AlphaZero’s chess play because it is so novel. It violates what are thought to be the basics of chess tactics and reveals new strategies of play. The knowledge “has influence” of a type.
I’m tempted to interpret you as believing that computers produce no knowledge about an environment beyond what people already have from the environment model (for example, the rules of chess) that they give a program (a learning algorithm). However, computer programs do produce surprising knowledge of some influence (for example, AlphaZero’s superior style of play) that was unknown to the humans who programmed them.
As far as development of depth of understanding goes, work in automated theorem proving goes back several decades: programs were finding novel proofs as early as 1956, and novel proofs of geometry theorems by the late 1950s. A proof of a theorem doesn’t qualify as a new theory, but it could show “depth of understanding”.
Then there’s developing new theories. Software is having success generating its own hypotheses. Here’s a quote from the linked article on Scientific American:
Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases.
The article linked from the “muse” link is about AI and artistic creativity.
In general, I don’t believe that the AI tools we use now show autonomous thought and consciousness with any continuity. In that way, they do not have our intelligence. However, I am not convinced by our discussion that we humans distinguish ourselves from AI in terms of capabilities for knowledge or understanding, as you have defined those terms.
I think we will learn a lot from AI. It will reveal inefficiencies and show us better ways to do many things. But it’s people who will find creative ways to use that information to create even better knowledge. AlphaZero did not create knowledge; rather, it uncovered new efficiencies. People can learn from that, but it takes a human to use what was uncovered to create new knowledge.
AlphaZero (machine learning) vs. problem solving about the nature of reality:
AlphaZero is given the basic rules of the game (people invented these rules).
Then it plays games with a finite set of moves on a finite board. It finds the most efficient ways to win (this is where Bayesian induction works).
Now graft the game onto our reality, which is a board with infinite squares, and infinitely many new sets of problems arise. For instance, new pieces show up regularly and the rules for them are unknown. How would AlphaZero solve these new problems? It can’t; it doesn’t have the problem-solving capabilities that people have. What AI needs is rational criticism, or creativity with error-correction abilities.
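To make that division of labor concrete, here is a toy self-play sketch in Python (purely illustrative; it is not AlphaZero’s actual algorithm, and the miniature “game” is invented for the example). Everything describing the environment is supplied by people; the program only adjusts numerical preferences within that fixed frame, which is the sense in which it finds efficient ways to win without ever stepping outside the rules it was given.

```python
# Toy illustration (not AlphaZero's real architecture): the "rules" functions
# below are written by people; the program only tunes move preferences inside
# the frame those rules define.
import random
from collections import defaultdict

def legal_moves(state):              # human-written rule: which moves exist
    return [m for m in range(1, 5) if m not in state]

def apply_move(state, move):         # human-written rule: how the position changes
    return state + (move,)

def is_terminal(state):              # human-written rule: when the game ends
    return len(state) == 2

def reward(state):                   # human-written rule: what counts as winning
    return 1 if sum(state) % 2 == 0 else -1

prefs = defaultdict(float)           # the only thing the program "learns"

for episode in range(10_000):        # self-play within the fixed rules
    state, history = (), []
    while not is_terminal(state):
        moves = legal_moves(state)
        move = max(moves, key=lambda m: prefs[(state, m)] + random.random())
        history.append((state, move))
        state = apply_move(state, move)
    for s, m in history:             # nudge preferences toward winning lines
        prefs[(s, m)] += 0.01 * reward(state)
```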
Games in general solved a problem for people (this introduces a new topic, but it is relevant nonetheless):
Imagine if AlphaZero wasn’t given the general rules of the game of chess. What would happen next? The program needs to be able to identify a problem before continuing.
People had a problem of being bored. We invented games as a temporary solution to boredom.
Does an AI get bored? No. So how could it invent games (if games weren’t invented yet)? It couldn’t, not without us, because it wouldn’t know it had a problem.
The article you linked to:
Yes, we will have many uses for machine learning and AI. It will help people come up with better hypotheses, solve complex (mathematical) problems, and improve our lives. Notice that these are complex problems, like sifting through big data and combining variables, but no creativity is needed. The problems I am referring to are problems about understanding the nature of reality. The article refers to a machine that goes through the same trial-and-error process as the AlphaZero algorithm mentioned earlier. But it’s people who created the ranking system for the chemical combinations mentioned in the article, the same way people created the game and rules of chess that AlphaZero plays. People identified the problems and solved them using conjectures and refutations. After the rules are in place, the algorithm can take over.
Lastly, it’s people who interpret the results and come up with explanations to make any of this useful.
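To sketch that division of labor in code (purely hypothetical; the ingredient names and the score() criterion below are invented for the example and are not taken from the article): people specify the ranking rule, and the algorithm’s contribution is exhaustive trial and error over it.

```python
# Hypothetical sketch: people define the ranking rule, the algorithm searches.
# The score() function stands in for human-specified criteria; the loop below
# is pure enumeration and trial and error.
from itertools import combinations

INGREDIENTS = ["A", "B", "C", "D", "E"]      # placeholder components

def score(combo):                            # human-designed ranking rule
    # Arbitrary criterion standing in for whatever the researchers specify.
    return len(combo) - abs(len(combo) - 3)

candidates = [c for r in range(1, 4) for c in combinations(INGREDIENTS, r)]
best = max(candidates, key=score)            # the algorithm's trial-and-error part
print(best, score(best))
```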
AI: finite problem-solving capabilities (Bayesianism works here).
People and AGI: infinite problem-solving capabilities (Popperian epistemology works here).
It’s a huge gap from one to the next.
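A minimal Python sketch of Bayesian updating over a fixed, finite hypothesis space illustrates the contrast (the coin hypotheses and numbers are made up for the example): the probabilities shift with the data, but no hypothesis that wasn’t supplied up front can ever enter the picture; proposing genuinely new hypotheses is the conjectural, Popperian step that happens outside this loop.

```python
# Minimal sketch: Bayesian updating over a fixed, finite hypothesis space.
# The data reshuffle probability among the given hypotheses, but nothing
# outside the prior "menu" can ever appear.
priors = {"coin is fair": 0.5, "coin is biased toward heads": 0.5}

def likelihood(hypothesis, flip):
    p_heads = 0.5 if hypothesis == "coin is fair" else 0.8
    return p_heads if flip == "H" else 1 - p_heads

def update(beliefs, flip):
    unnormalized = {h: p * likelihood(h, flip) for h, p in beliefs.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

beliefs = dict(priors)
for flip in "HHTHHHHT":            # observed coin flips
    beliefs = update(beliefs, flip)
print(beliefs)                     # posterior over the same two hypotheses
```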
I don’t expect you to be convinced by my explanation. It took me years of carrying this epistemology around in my head, learning more from Popper and David Deutsch, and the like, to make sense of it. It’s a work in progress.
Thanks for your great questions, this is fun for me. It’s also helping me think of ways to better explain this worldview.
You’re welcome, and thanks for the reply. I’m enjoying our conversation.
What about:
AI art as an example of humanlike creativity
AI generating hypotheses that humans could not, seemingly demonstrating humanlike creativity
AI generating theorems (conjectures, refutations) in systems going back to the 1960s
If the concerns are:
creativity in response to real-world events
ability to increase understanding of a novel environment without aid from a predefined ontology, except for testing behaviors learned by mimicry
ability to improve epistemological distinctions
then I think future developments in robotics will satisfy human intuitions about what it takes for an AI to be an AGI. We can see the analogies between robot behavior and human behavior more easily, and they will be an easier proof of the kind of AGI functionality that your worldview denies.
EDIT: When the robots are controlled or communicated with by external AI, using input from robot sensors or external sensors, we will have a fuller idea of the varieties of humanlike experience and learning that AI can demonstrate.