Hi Arturo. Thank you for the thoughtful and detailed assessment of the AI risk literature. Here are a few other sources you might be interested in reading:
AI Timelines: Where the arguments and the “experts” stand summarizes key sources of evidence on AI timelines. Namely, it finds that AI researchers believe AGI will likely arrive within the next few decades, that the human brain uses more computational power than today’s largest AI models but that future models will soon surpass human levels of compute, and that economic history suggests transformative changes to growth regimes are absolutely possible.
Jacob Cannell provides more details on the amount of computational power used by various biological and artificial systems. “The Table” is quite jarring to me.
Economic Growth Under Transformative AI by Phil Trammell and Anton Korinek reviews the growth theory literature in economics, finding that mainstream theories of economic growth admit the possibility of a “singularity” driven by artificial intelligence.
Tom Davidson’s model uses growth theory to model AI progress specifically. He assumes that AI will be able to perform 100% of economically relevant tasks once it uses the same amount of computation as the human brain. The model shows that this would lead to a “fast takeoff”: the world can look very normal and yet, within only a few years, see >30% GDP growth and the advent of superintelligent AI systems (a toy sketch of this feedback loop appears after this list).
Natural Selection Favors AIs over Humans makes an argument that doesn’t depend on how far we are away from AGI—it will apply whenever advanced AI comes around.
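To make the feedback loop in Davidson’s model concrete, here is a toy sketch of how automation driven by growing compute can turn near-normal growth into >30% growth within a few years. The threshold, growth rates, and functional forms below are my own illustrative assumptions, not Davidson’s actual parameters.

```python
# Toy sketch (illustrative assumptions, not Davidson's actual model):
# effective compute grows with reinvestment; the share of tasks AI can automate
# rises as compute approaches a stylized brain-scale threshold; automation feeds
# back into both output growth and compute investment.

BRAIN_COMPUTE = 1e15   # assumed compute threshold at which ~all tasks are automatable
compute = 1e12         # starting effective compute (arbitrary units)
output = 1.0           # world output, normalized

for year in range(30):
    automated = min(1.0, compute / BRAIN_COMPUTE)   # share of tasks AI can perform
    growth = 0.03 + 0.50 * automated                # 3% baseline, up to +50% from automation
    output *= 1 + growth
    compute *= 1 + 0.30 + 2.0 * automated           # automation accelerates compute investment
    print(f"year {year:2d}  automated={automated:6.1%}  GDP growth={growth:5.1%}")
```

Run as written, growth stays in the single digits for roughly two decades and then jumps past 30% within a few years of the compute threshold being reached, which is the qualitative “fast takeoff” shape the model describes.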
To respond to your specific argument that:
To make an affirmative case: there has been a lot of work on using language models like ChatGPT to operate in the physical world. Google’s SayCan showed that PaLM (a language model trained much like GPT) could successfully direct a robot in a physical environment. The PiQA benchmark shows that language models perform worse than humans but far better than random chance at answering commonsense questions about the physical world.
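For concreteness, here is a minimal sketch of how PiQA-style evaluation is commonly done with a plain language model: score each candidate solution by the log-likelihood the model assigns to it and pick the higher-scoring one. The example item below is made up for illustration rather than drawn from the benchmark, and GPT-2 is just a small stand-in model.

```python
# Minimal sketch: choose between two physical-commonsense solutions by comparing
# the (approximate) total log-likelihood a causal language model assigns to each.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def loglik(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)           # out.loss is mean cross-entropy per token
    return -out.loss.item() * ids.shape[1]     # approximate total log-likelihood

goal = "To keep hot coffee from cooling quickly,"
solutions = ["pour it into an insulated thermos.",
             "pour it into a metal colander."]
scores = [loglik(goal + " " + s) for s in solutions]
print("model prefers:", solutions[scores.index(max(scores))])
```

Performance well above chance on this kind of two-way choice is what the PiQA numbers reflect, even though models still fall short of human accuracy.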
Moreover, recent work has given language models additional sensory modalities so they might transcend the world of text. ChatGPT plugins allow a language model to interact with any digital software interface that can be accessed via the web or code. GPT-4 is trained on both images and text. GATO is a single network trained on text, images, robotic control, and game playing. Personally, I believe that AI could pose a threat without physical embodiment, but the possibility of physical embodiment is far from distant and has seen important progress over the past several years.
Historically, people like Gary Marcus and Emily Bender have been making that argument for years, but their predictions have largely turned out to be incorrect. Bender and Koller’s famous paper argues that language models trained on text will never be able to understand the physical world. They support their argument with a prompt in Appendix A on which GPT-2 performs terribly, but if you plug their prompt, or anything in a similar style, into ChatGPT, you’ll find that it clearly perceives the physical world. Many have doubted the language model paradigm, and so far, their predictions don’t hold up well.
First, a comment on my specific argument.
The link about SayCan is interesting, but the environment looks very controlled and idiosyncratic, and the paper is quite unspecific about the link between the detailed instructions and their execution. It is clear that the LLM is a layer between unspecific human instructions and detailed verbal instructions. The relation between those detailed verbal instructions and the final execution is not well described in the paper. The most interesting part, the robot-LLM feedback (whether the robot modifies the chain of instructions as a consequence of execution failure or success), is unclear. I find it quite frustrating how descriptive, high-level, and “results”-focused all these corporate research papers are. You cannot grasp what they have really done (the original AlphaZero white paper!).
“Personally I believe that AI could pose a threat without physical embodiment”
Perhaps, but to be interested in defeating us, it needs to have “real world” interests. The state space ChatGPT inhabits is massively made of text strings; its interests are mainly in being an engaging chatterer (it is the perfect embodiment of the Anglo chattering classes!). In fact, my anecdotal experience with ChatGPT is that it is an incredible poet, but very dull in reasoning. The old joke about Keynes (too good a writer to trust his economics), but on a massive scale.
Now, if you train an AI in a physics-like virtual world, and its training begins with physical recognition and only afterwards moves on to linguistic training, the emergence of AGI would be at least possible. Currently, we have disparate successes in “navigation”, “object recognition”, “game playing”, and language processing, but AIs have neither an executive brain nor a realistic internal world representation.
Regarding the Bender and Koller paper: in March 2023 she was still quite sceptical of the semantic abilities of ChatGPT. And GPT-4 is still easily fooled when you keep in mind that it does not understand… On the other hand, in my view it is a human-level poet (in fact, far beyond the average person, almost in the top 0.1%). Its human or even superhuman verbal abilities and its reasoning shortcomings are what can be expected of any (very good) text-trained model.
Regarding the links, I really find the first two quite interesting. The timelines are reasonable (only a 10% probability within 15 years). What I find unreasonable is to regulate while we are still working on what amounts to brain tissue. We need more integrative and volitional AI before there is anything to regulate.
I am very skeptical of any use of development, growth, and other historical and classical economics tools for AI. In the end, Popper’s classic arguments (in “The Poverty of Historicism”) that the course of science cannot be predicted are strong.
Economics is mainly about “equilibrium” results given preferences and technology. Economics is sound at taking (preferences, technology) as input and providing “goods allocations” as output. The evolution of preferences and technologies is exogenous to economics. The landscape of still-unknown production possibility frontiers is radically uncertain.
On the other hand, I find economics (= applied game theory) an extremely useful tool for thinking about and creating the training world for artificial intelligence. As an economist, I find that the environment is the most legible part of AI programming. Building interesting games and tasks to train AIs is a main part of their development. Mechanism design (incidentally, my current main interest), algorithmic game theory, and agent-based economics are directly related to AI in a way no other branch of “classical economics” is; a minimal toy example of such an environment is sketched below.
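To illustrate the “economics as environment design” point, here is a minimal toy sketch of a mechanism-design-style training task: a sealed-bid second-price (Vickrey) auction. The interface and reward structure are my own illustration, not any standard benchmark.

```python
# Toy sketch: a Vickrey (second-price) auction as a training environment.
# Each episode, agents observe private values and submit bids; the highest
# bidder wins and pays the second-highest bid.
import random

class VickreyAuctionEnv:
    def __init__(self, n_agents: int = 3):
        self.n_agents = n_agents

    def reset(self):
        # Private values are the agents' observations for this episode.
        self.values = [random.random() for _ in range(self.n_agents)]
        return self.values

    def step(self, bids):
        winner = max(range(self.n_agents), key=lambda i: bids[i])
        price = sorted(bids, reverse=True)[1]            # second-highest bid
        rewards = [0.0] * self.n_agents
        rewards[winner] = self.values[winner] - price    # winner's surplus
        return rewards

env = VickreyAuctionEnv()
values = env.reset()
print(env.step(values))   # here each agent bids its value, which is the dominant strategy
```

The appeal of a mechanism like this as a training task is that it has a known game-theoretic solution (truthful bidding), so you can check whether learning agents actually converge to it.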