Probability and Bayesian inference work within finite sets, as in a game of chess or poker.
Mathematically, probabilities can also be defined over infinite sets.
For example, there is a uniform probability distribution over the real numbers between 0 and 1 (of which there are infinitely many).
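To make this concrete (standard textbook material, not part of the original exchange): the uniform distribution on [0, 1] assigns probability zero to every individual real number, yet probabilities of intervals are perfectly well defined:

```latex
% Uniform distribution on [0, 1]: density and interval probabilities.
f(x) =
\begin{cases}
  1 & \text{if } 0 \le x \le 1,\\
  0 & \text{otherwise,}
\end{cases}
\qquad
P(a \le X \le b) = \int_a^b f(x)\,\mathrm{d}x = b - a
\quad \text{for } 0 \le a \le b \le 1.
```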
To achieve AGI, we will need to program the following:
knowledge-creating processes
emotions
creativity
free will
consciousness.
With the exception of knowledge-creating processes, these requirements are just wrong, in my opinion.
As a counterexample, AIXI can be formulated without any notion of emotions, creativity, free will, or consciousness.
Approximations to AIXI can be programmed without any knowledge of these.
And AIXI is (at least) AGI-level.
(Of course, AIXI is not real: non-approximated AIXI is impossible to build, and it is doubtful that approximating AIXI will be useful for building AGI. This is just an example showing that vast intelligence is possible without explicitly programming emotions, etc.)
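For reference, here is the AIXI action-selection rule, roughly as Hutter (2005) presents it (up to notational details). Note that it is stated entirely in terms of actions, observations, rewards, and program lengths; emotions, creativity, free will, and consciousness appear nowhere:

```latex
% AIXI's action at cycle k, planning to horizon m (after Hutter 2005):
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       (r_k + \cdots + r_m)
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
% U is a universal Turing machine, q ranges over programs (candidate
% environments), and \ell(q) is the length of q in bits.
```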
Example: if you are playing chess against a computer and it wins, what beat you? What beat you is the abstract knowledge embodied in that computer program. People put that knowledge there.
This is just wrong in the case of AlphaZero, where the knowledge was learned by training on self-played chess games rather than explicitly programmed in.
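As a rough illustration of why nothing chess-specific beyond the rules is programmed in, here is a minimal sketch of an AlphaZero-style self-play loop. All names here are hypothetical placeholders, not DeepMind's actual code:

```python
# Minimal sketch of an AlphaZero-style self-play training loop.
# Hypothetical interfaces, not DeepMind's implementation. The point:
# only the *rules* of the game are hand-coded; all evaluation
# knowledge is learned from the network's own games.

def self_play_game(net, rules):
    """Play one game with the network on both sides; return (states, outcome)."""
    state, history = rules.initial_state(), []
    while not rules.is_terminal(state):
        move = net.choose_move(rules, state)  # e.g. a search guided by the net
        history.append(state)
        state = rules.apply(state, move)
    return history, rules.outcome(state)      # e.g. +1 win, 0 draw, -1 loss

def train(net, rules, iterations, games_per_iteration):
    """Alternate generating self-play data with updating the network."""
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iteration):
            states, outcome = self_play_game(net, rules)
            examples += [(state, outcome) for state in states]
        net.fit(examples)  # the network improves from its own play
    return net
```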
Mathematical problems (infinities) don't need to reference the physical world. Math claims certainties; science doesn't. Science must reference the physical world.
AI like AlphaZero will reveal inefficiencies and show us better ways to do many things. But it's people who will find creative ways to use that information to create even better knowledge. AlphaZero did not create knowledge; rather, it uncovered new efficiencies. People can learn from that, but it takes a human to use what was uncovered to create new knowledge.
Great questions. I'm still putting some more thought into these. Thanks