First, a comment on the specific argument.
The link about SayCan is interesting, but the environment looks very controlled and idiosyncratic, and the paper is quite vague about the link between the detailed instructions and their execution. It is clear that the LLM is a layer between unspecific human instructions and detailed verbal instructions; the relation between those detailed verbal instructions and the final execution is not well described in the paper. The most interesting issue, the robot-to-LLM feedback (whether the system modifies the chain of instructions in response to execution failure or success), is left unclear. I find it quite frustrating how descriptive, high-level, and “results”-focused all these corporate research papers are. You cannot grasp what they have really done (remember the original AlphaZero white paper!).
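As I read the paper, its selection loop can be sketched roughly as follows. This is a hypothetical toy, not their code: the skill names and scores are invented placeholders. The idea is that the LLM scores each candidate skill as a plausible next step, an affordance value function scores whether that skill can succeed in the current state, and the product of the two picks the next skill:

```python
# Toy sketch of a SayCan-style selection loop (hypothetical; all names and
# scores are invented placeholders, not from the paper's actual codebase).

SKILLS = ["find a sponge", "pick up the sponge", "go to the table"]

def llm_score(instruction, history, skill):
    # Placeholder for the probability the LLM assigns to `skill` as the
    # next step given the instruction and the steps taken so far.
    fake = {"find a sponge": 0.6, "pick up the sponge": 0.3, "go to the table": 0.1}
    return fake[skill]

def affordance_score(state, skill):
    # Placeholder for the value-function estimate that `skill` can
    # actually succeed from the current physical state.
    fake = {"find a sponge": 0.9, "pick up the sponge": 0.2, "go to the table": 0.8}
    return fake[skill]

def next_skill(instruction, history, state):
    # Pick the skill maximizing (LLM plausibility) x (physical feasibility).
    return max(SKILLS, key=lambda s: llm_score(instruction, history, s)
                                     * affordance_score(state, s))

print(next_skill("clean the spill", [], "robot in kitchen"))  # -> find a sponge
```

Note what the sketch leaves out, because the paper does too: whether execution success or failure feeds back into the loop and modifies the remaining chain of instructions, which is exactly my question above.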
“Personally I believe that AI could pose a threat without physical embodiment”
Perhaps, but to be interested in defeating us, it needs to have “real world” interests. The state space ChatGPT inhabits is massively made of text strings; its interests are mainly to be an engaging chatter (it is the perfect embodiment of the Anglo chattering classes!). In fact, my anecdotal experience with ChatGPT is that it is an incredible poet but very dull at reasoning. The old joke about Keynes (too good a writer to trust his economics), but on a massive scale.
Now, if you train an AI in a physics-like virtual world, beginning its training with physical recognition and only afterwards moving on to linguistic training, the emergence of AGI would at least be possible. Currently we have disparate successes in “navigation”, “object recognition”, “game playing”, and language processing, but AIs have neither an executive brain nor a realistic internal world representation.
Regarding the Bender and Koller paper: as of March 2023 she was still quite skeptical of the semantic abilities of ChatGPT. And ChatGPT-4 is still easily fooled once you keep in mind that it does not understand… On the other hand, in my view it is a human-level poet (in fact, far beyond the average person, almost in the top 0.1%). Its human or even superhuman verbal abilities and its reasoning shortcomings are what can be expected of any (very good) text-trained model.
Regarding the links, I really find the first two quite interesting. The timelines are reasonable (15 years gets only a 10% probability). What I find unreasonable is regulating while we are still working on the brain tissue, so to speak. We need more integrative and volitional AI before there is anything to regulate.
I am very skeptical of applying development, growth, and other historical and classical economics tools to AI. In the end, Popper's classic argument (in “The Poverty of Historicism”) that the course of science cannot be predicted is strong.
Economics is mainly about “equilibrium” results given preferences and technology. Economics is sound at taking (preferences, technology) as input and returning “goods allocations” as output. The evolution of preferences and technologies is exogenous to economics, and the landscape of still-unknown production possibility frontiers is radically unknown.
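This input/output view can be made concrete with a toy example of my own (a minimal illustration, not a standard library routine): given preferences and endowments, a two-good Cobb-Douglas exchange economy has a closed-form competitive equilibrium, mapping (preferences, endowments) straight to an allocation.

```python
# Minimal illustration (hypothetical toy): competitive equilibrium of a
# two-good exchange economy with Cobb-Douglas preferences u = x^a * y^(1-a).

def equilibrium(agents):
    """agents: list of (alpha, endow_x, endow_y) triples.
    Returns (price of x in units of y, list of (x, y) allocations)."""
    # Market clearing for good x pins down the relative price:
    # sum_i alpha_i * e_y_i = p_x * sum_i (1 - alpha_i) * e_x_i
    px = sum(a * ey for a, ex, ey in agents) / sum((1 - a) * ex for a, ex, ey in agents)
    alloc = []
    for a, ex, ey in agents:
        wealth = px * ex + ey                       # value of the endowment
        alloc.append((a * wealth / px,              # Cobb-Douglas demand for x
                      (1 - a) * wealth))            # demand for y (price 1)
    return px, alloc

# Symmetric economy: each agent owns one unit of one good.
px, alloc = equilibrium([(0.5, 1.0, 0.0), (0.5, 0.0, 1.0)])
print(px, alloc)  # px = 1.0; each agent consumes (0.5, 0.5)
```

The point of the toy is what it holds fixed: the alphas and endowments are given from outside, exactly as preferences and technology are exogenous to the theory.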
On the other hand, I find economics (= applied game theory) an extremely useful tool for thinking about and creating the training worlds for artificial intelligence. As an economist, I find that the environment is the most legible part of AI programming. Building interesting games and tasks to train AIs is a main part of their development. Mechanism design (incidentally, my current main interest), algorithmic game theory, and agent-based economics are directly related to AI in a way no other “classical economics” branch is.
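A small example of what I mean by mechanism design as a training environment (my own hypothetical sketch, not any existing benchmark): a sealed-bid second-price auction, where truthful bidding is a dominant strategy, gives a learning agent a crisp target behaviour to discover.

```python
# Hypothetical sketch: a second-price auction as an AI training environment.
# The mechanism's known equilibrium (bid your true value) makes the learned
# behaviour easy to evaluate.

def second_price_auction(bids):
    """bids: dict bidder -> bid. Highest bidder wins, pays second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price

def payoff(value, my_bid, rival_bids):
    """Payoff to bidder "me" with private value `value` bidding `my_bid`."""
    bids = dict(rival_bids, me=my_bid)
    winner, price = second_price_auction(bids)
    return value - price if winner == "me" else 0.0

# Bidding one's true value weakly dominates shading the bid:
rivals = {"a": 0.4, "b": 0.7}
print(payoff(0.8, 0.8, rivals))  # truthful: win at price 0.7, payoff ~0.1
print(payoff(0.8, 0.6, rivals))  # shaded: lose the auction, payoff 0.0
```

An agent trained against such an environment can be checked against the theory, which is precisely the legibility I am claiming for economic environments.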