Worldview iPeople—Future Fund’s AI Worldview Prize
I was inspired to enter this contest to shed light on a worldview that may influence the Future Fund’s plans regarding AGI. The Future Fund listed the three possibilities below. I think the last one is where their focus should continue.
“As a result, we think it’s really possible that:
all of this AI stuff is a misguided sideshow,
we should be even more focused on AI, or
a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem.”
Do You Accept This Challenge?
I’m apprehensive about submitting this worldview, because it disregards probabilities. This is a contest judged by superforecasters (our culture’s prophets), about probabilities (our culture’s prophecies), and I’m arguing that probabilities are irrelevant to the subject of this contest. Can you see the conflict? This will be tough for me to explain and for the judges to assess with an unbiased perspective. But I think we are all after the same thing: we want to better understand reality and how to prepare for an uncertain future.
Intro
This worldview is a simplified attempt to explain how little we know in the areas relating to Artificial General Intelligence where we need a better understanding before a genuine AGI can be programmed to THINK. Yes, a machine that can think, comprehend and explain things is what is needed to qualify as an AGI.
AI vs AGI
AI: a mindless machine. It includes only the things we can already explain and program.
AGI: what is required is a mind running on a machine, which cannot exclude knowledge-creating processes (life), emotions, creativity, free will and consciousness.
AI can be better than humans at many things (dancing, chess, memory tasks, a finite list of things…) but not everything.
AGI will be better at everything and will have infinite potential. But to get an AGI, we have many hard problems to solve first.
Probabilities And Their Problems
*There are many more ways to be wrong than to be right.
The first question one should ask is not whether AGI is probable, but whether it is possible or impossible. How can probabilities not be relevant to developing an AGI within a specified timeframe? I’ll start by pointing out the problem with probabilities in a universe which contains people. People are problem solvers; we cannot prophesy what knowledge people will create in the future. There are infinite possibilities for what we will come up with next, so our future knowledge growth is unpredictable. Probabilities only work within finite sets, like in a game of chess or poker. But knowledge is infinite and has no bounds. So, when it comes to humans solving problems in the real world, probabilities are irrelevant.
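To make the finite-set point concrete, here is a minimal Python sketch (my illustration, not part of the original argument): in poker, the whole sample space can be enumerated, so a probability is just a ratio of counted outcomes. There is no analogous list of “all ideas people might create” to count over.

```python
from itertools import combinations

# A 52-card deck is a finite set: every outcome can be listed, so
# probabilities are well defined as ratios of counted outcomes.
deck = [rank + suit for rank in "23456789TJQKA" for suit in "shdc"]

hands = list(combinations(deck, 2))               # all 1,326 two-card hands
pairs = [h for h in hands if h[0][0] == h[1][0]]  # both cards share a rank

print(f"P(dealt a pocket pair) = {len(pairs)}/{len(hands)} "
      f"= {len(pairs) / len(hands):.4f}")         # 78/1326 = 0.0588
```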
When referring to AGI, we are trying to understand what will work in the physical world. In reality there is no way of knowing whether a thing is probably true or certainly true. We can never know if we are 100% right, or 90%, 95% or 99% correct. We can only be “less wrong”, by eliminating errors from our best ideas.
Prophecy Vs Predictions
Imagine trying to explain the metaverse to anyone from 1901. Now keep that in mind for the following…
Yes, we can make predictions, like the outcome of some science experiments, or we can use a mathematical formula to predict the location of a planet in its orbit 100 years from now. These are based on knowledge we have today. We can’t predict the knowledge we will have in the future, or else we would have it today. Notice how no one from 100 years ago wrote a story featuring today’s best technologies? We can’t imagine most of our future tech. In addition, predicting a way in which our tech could harm us is easier than imagining how it could help us, which explains why pessimistic, dystopian sci-fi movies are more common than optimistic sci-fi movies in which our problems have been solved.
Using predictions, we can only guess so far ahead. If we could predict the outcome of an experiment more than one step at a time, why wouldn’t we just jump past the first or second step to the outcomes of the later steps? Each outcome introduces new possibilities that were not possible before. Guessing at those later outcomes is prophecy: storytelling, fun but not scientific.
Assigning a probability to a genuine AGI arriving before a specific time is prophecy. It’s similar to assigning a probability that our civilization will be wiped out before the end of the century. If prophecy were possible, we wouldn’t need incremental improvements in our ideas. Things that our next inventions will make possible can only be invented after those inventions exist. If we could prophesy, we would just skip the middle steps and invent the subsequent inventions directly. But we can’t. We have no idea what humans will come up with in the future; ideas are infinite and unpredictable.
This only touches on why we cannot forecast a probability of whether AGI will happen, in the real physical world, before a specific time. It’s prophecy, which is dependent on random luck.
(This understanding of probabilities takes time to come to terms with; it sure did for me.)
The Knowledge Clock
For progress, it may help to think in terms of the speed of knowledge growth, not a date on a calendar or revolutions around the sun. Assigning an arbitrary due date to AGI is non-science. Time isn’t the relevant factor. The speed of our knowledge growth is our best metric, and it can’t be predicted.
If we can create the necessary knowledge regarding the entities I’ve listed in this worldview, then we will have AGI sooner or later, depending on the speed of our knowledge growth. Yes, people, us, we need to create this knowledge; the first AGI isn’t going to create itself.
Is AGI Possible Or Impossible?
There is no law of physics that makes AGI impossible to create. For example, human consciousness exists and runs on a wetware computer: a person’s mind (software) running on a person’s brain (hardware). We have no reason to think it would be impossible to recreate this. Therefore, we can deduce that it is possible to program an AGI, once we create the required knowledge.
To Program An AGI We Need More Knowledge About Knowledge
Perhaps we have all the necessary technology today to program an AGI. What we lack is the knowledge of how to program it.
Knowledge is not something you can get pre-assembled off a shelf. For every piece of knowledge there is a building process. Let’s identify the two types of knowledge creation that we know of:
Genes: the first knowledge-creating process that we know of. It is a mindless process. Genes create knowledge by adapting to an environment through replication, variation and selection (a toy version of this loop is sketched after this list). The knowledge is embodied in the genes themselves. It is a slow knowledge-creating process.
Knowledge created in our minds: an intentional and much faster process. We create knowledge by recognizing problems and creatively guessing ideas for solutions (adapting to an environment). We guess, then criticize our guesses. This process starts in our minds. It’s happening right now in you. I am not uploading knowledge into your brain. You are guessing what I’m writing about, then comparing and criticizing those guesses using your own knowledge. You are trying to understand the meaning of what I’m sharing with you, and then that idea competes with your current knowledge of the subject. It’s a battle of ideas in your mind. If you can be unbiased in your thoughts and criticize your own idea as well as the competing idea, the idea containing more errors can be discarded, leaving you with the better idea and therefore improving your knowledge. Transferring the meaning of ideas (replicating them) is hard to do. People are the only entities that can do it, and we do it imperfectly (with variation and selection).
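The gene-style loop of replication, variation and selection can be shown in a toy program. Below is a minimal Python sketch (my illustration: a variant of Dawkins’ “weasel” program). The fixed target string stands in for a fixed environment, which real evolution does not have, so this shows only the bare shape of the three ingredients, not how new knowledge arises.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Selection criterion: how well the string fits its "environment".
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Variation: each character has a small chance of a copying error.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(CHARS) for _ in TARGET)
generations = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]  # replication + variation
    parent = max(offspring, key=fitness)              # selection
    generations += 1
print(f"Target reached after {generations} generations")
```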
Computers today are not creating any new knowledge. They are only using knowledge which people have already created. People still need to feed the knowledge into the machine.
An AGI Needs Its Own Creativity To Solve Real Problems
*When I refer to problems, I mean problems that relate to reality, not abstract mathematical problems that don’t need to reference the physical world. Math claims certain truth; science doesn’t. There is no certainty in reality: we can never be certain we have found the truth. What we want to do is solve problems that make life more enjoyable, relieve suffering and help us understand more about reality. We do this by identifying, understanding and fixing our problems.
All problems are people problems. Without people, problems aren’t recognized. The dinosaurs didn’t know there was a problem before they went extinct, and no other entity that we know of can understand problems either. An AGI must be programmed to deal with new problems.
An AGI needs creativity to solve new problems. Creativity is about creating something new that didn’t exist before. People have the potential to solve an infinite number of problems. An AI has a finite set of problems it can solve, and it depends on humans to program that finite set. AI cannot solve new problems which have never existed before. Creativity is an essential step in the knowledge-creation process; it’s how we invent theories.
The method:
Problem → Theory (this is where creativity is necessary) → Error Correction (experiment) → New Better Problem → Repeat ( ∞ )…
We know this method works because it creates progress. We see things around us improve and problems being solved.
An AGI needs creativity to help solve new problems. Creativity itself is a hard problem which we do not fully understand yet.
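As a sketch only, the method can be written as a loop. The starred functions below are stand-ins for exactly the creative steps nobody yet knows how to program; that gap is the point of this section. (Hypothetical names, with dummy stubs so the sketch runs at all.)

```python
# A schematic rendering of the method above as a loop. The (*) functions
# are placeholders for the unsolved creative steps, not real AI.
def conjecture(problem):            # (*) creative guess -- the unsolved part
    return f"a theory addressing: {problem}"

def criticize(theory):              # (*) criticism/experiment -- also unsolved
    return None                     # stub: pretend no error was found

def solve(problem):
    while True:
        theory = conjecture(problem)    # Problem -> Theory
        error = criticize(theory)       # Error Correction (experiment)
        if error is None:
            return theory               # best theory so far, never final truth
        problem = error                 # New Better Problem; repeat

print(solve("how to program an AGI"))
```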
Can AGI Evolve Artificially?
For computers to evolve AGI the way humans evolved intelligence, only faster, we would first need to fill the gaps in our understanding of how life emerged. We don’t yet know how inorganic material can become organic, self-replicating life forms. Our theories contain huge gaps, which we need to fill before the process can be understood and then programmed.
Another idea for evolving AGI is to recreate the universe in an artificial simulator. For this we would need to know all the laws of physics, then recreate our universe according to those laws in a computer simulation. This may or may not be feasible, given the amount of physical material we would need for the computations and the time available before the end of the universe. Even then, we have a lot of learning to do first.
Will a computer spontaneously become a person if we keep filling it with human knowledge and increasing its speed and memory? No; that would be similar to the old theory of Lamarckism, which Darwin replaced with a better theory, namely evolution by natural selection.
Consciousness
We don’t know how much we need to comprehend in order to understand “consciousness”. Consciousness seems to be our mind’s subjective experience, and it appears to emerge from the physical processes in our brain. Once we have understood consciousness, we can show this by programming it. David Deutsch (one of the godfathers of quantum computing) has a rule of thumb: “If you can’t program it, you haven’t understood it.” Meaning: only when we understand human consciousness well enough to program it into the software running on our computers will we have a real AGI.
Abstractions: Why Are They Important For AGI?
If you haven’t spent much time thinking about abstractions before, then what I write here will not be enough for you to fully understand them, but it’s a start. It takes a lot of thinking about them before they are understood. Abstractions are real, complex systems that have effects on the physical world, but they are not physical. They emerge from physical entities. By “physical” I mean made of something tangible in our universe. Non-physical abstractions are powered by the physical but do something else. Our mind is an abstraction: our brain (physical) carrying knowledge (non-physical). Yes, the knowledge is encoded in our physical brains, like a program in a computer, but it’s like another layer above the physical, and that layer is our mind. Another way I’ve come to understand abstractions is that they are whatever makes something more than the sum of its parts. The “more” is the abstraction. And they are objectively real.
Today’s computer programs contain abstractions. The computers are made of atoms, but the programs contain abstractions which can affect the world. E.g. if you are playing chess against a computer and it wins, what beat you? What beat you is the abstract knowledge embodied in that computer program. People put that knowledge there.
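One concrete sense in which “people put that knowledge there” can be shown in a toy Python sketch (my illustration, not a real engine). The classical piece values below are human chess knowledge, worked out by players over centuries and written into the program by its authors. The program applies that knowledge; it did not create it.

```python
# Toy chess evaluation: the piece values are human knowledge encoded
# by people. The running program merely applies them.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(material):
    """Score a position as (my material) - (opponent's material).
    `material` maps piece letters to counts for each side."""
    mine = sum(PIECE_VALUES[p] * n for p, n in material["mine"].items())
    theirs = sum(PIECE_VALUES[p] * n for p, n in material["theirs"].items())
    return mine - theirs

# Example: up a rook, down a knight -> the encoded knowledge says +2.
print(evaluate({"mine": {"R": 1}, "theirs": {"N": 1}}))  # -> 2
```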
Our minds (like computer programs) are abstract: non-physical entities, not made of atoms, which affect physical entities.
Understanding abstractions is a necessary step to achieving AGI. First we need a good explanation of how our abstract minds work, to get us closer to programming AGI. To create an AGI, we must program our knowledge into software to make possible an abstract entity like our mind.
AGI Progress So Far?
There is no fundamental difference between today’s computers and our original computers. They still follow the same philosophy, only faster, with more memory and less error-prone.
Today’s AI cannot genuinely pass the Turing test, in which an AI must fool a human judge into believing the AI is human. There are questions we can ask an AI to test whether it can understand something, anything. But as of yet, there is no understanding happening. Don’t expect Siri to be your go-to companion any time soon; she’s going to be frustrating for a while still.
Conclusion
I think a real AGI will be a good thing. There are benefits that we can imagine and more that we can’t. Immortality comes to mind; so does populating the rest of the universe.
A person is a mind with an infinite repertoire of problem-solving potential. After we understand our minds and program an AGI, it will be, by all definitions, a person. AGIs will be able to understand things and to solve problems. They will be knowledge creators and explainers like us. And we will treat them like people.
Today, computers don’t have ideas, but people do. Computers don’t comprehend meaning from words, gestures, implications, symbols or anything at all. People do. For an AGI, what is needed is a knowledge-creating, understanding and explaining program. We aren’t even close. It is possible to program an AGI. But “probably” having an AGI before a certain time is prophecy. Only after we understand human consciousness well enough can we begin the process of programming it.
Understanding that we can solve the many hard problems needed to program an AGI is how we deal with our unpredictable future. We can’t solve future problems today. But our knowledge continues to grow.
The Beginning…
* Please, before downvoting, could you explain why, so I can separate emotional votes from rational votes? I strongly encourage criticism. After all, this is how knowledge grows.