Argument 6 describes the AGI as a “species”, but services are not a species; agents are a species. Arguments 4 and 5 as written describe the AGI as an agent: once the AGI is described as an “it” that is doing something, it certainly sounds like an independent agent to me. A service and an agent are fundamentally different in nature, not just two views of the same thing, because the outcome would depend on the objectives of the instructing agent.
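To make the distinction concrete, here is a minimal toy sketch (the names and behaviour are entirely my own invention, not drawn from CAIS or any real system): a service performs one bounded task and hands the result back to whoever instructed it, whereas an agent keeps choosing actions in pursuit of its own objective.

```python
# Toy illustration only: invented names and behaviour, not anyone's
# proposed architecture.

def translation_service(text: str) -> str:
    """A service: performs one bounded task and then stops.
    What is done with the result is decided by the instructing agent."""
    fake_dictionary = {"hello": "bonjour", "world": "monde"}
    return " ".join(fake_dictionary.get(w, w) for w in text.lower().split())


class PaperclipAgent:
    """An agent: holds its own objective and keeps acting to advance it."""

    def __init__(self) -> None:
        self.paperclips = 0

    def step(self) -> None:
        # The agent, not a caller, decides what to do next.
        self.paperclips += 1

    def run(self, steps: int) -> None:
        for _ in range(steps):
            self.step()


if __name__ == "__main__":
    # The service acts only when instructed; the instructing agent
    # (here, this script) determines what the output is used for.
    print(translation_service("hello world"))

    # The agent acts on its own objective for as long as it runs.
    agent = PaperclipAgent()
    agent.run(steps=10)
    print(agent.paperclips)
```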
I’ve actually spent a fair while thinking about CAIS, and written up my thoughts here. Overall I’m skeptical about the framework, but if it turns out to be accurate I think that would heavily mitigate arguments 1 and 2, somewhat mitigate 3, and not affect the others very much. Insofar as 4 and 5 describe AGI as an agent, that’s mostly because it’s linguistically natural to do so—I’ve now edited some of those phrases. 6b does describe AI as a species, but it’s unclear whether that conflicts with CAIS, insofar as the claim that AI will never be agentlike is a very strong one, and I’m not sure whether Drexler makes it explicitly (I discuss this point in the blog post I linked above).
I don't share your scepticism about the framework. Indeed, it seems a useful model for how we humans work. Through training we become expert, to varying degrees, at a range of tasks or services: as we get into a car we switch on our “driving services” module (and its sub-modules), for example. Underlying that, and separately, we have our unconscious, which drives the majority of our motivations as a “free agent”: our mammalian brain, which drives our socialising and norming behaviour, and beneath that our limbic brain, which deals with emotions like fear and status, the things that in my experience “move the money” when they are encouraged.
It does not seem to me that we are particularly “generally intelligent”. Put in a completely unfamiliar setting, without all the tools that now prop us up, we would struggle far more than a species already familiar with that environment.
To me the intelligent-agent framing takes the debate in the wrong direction and, most concerningly, dramatically understates the near and present danger of utility-maximising services (“this is not superintelligence”), such as this example discussed by Yuval Noah Harari and Tristan Harris:
https://www.youtube.com/watch?v=v0sWeLZ8PXg
I think this is a good comment about how the brain works, but do remember that the human brain can both hunt in packs and do physics. Most systems you might build to hunt are not able to do physics, and vice versa. We’re not perfectly competent, but we’re still general.
I agree that the extent to which individual humans are rational agents is often overstated. Nevertheless, there are many examples of humans who spend decades striving towards distant and abstract goals, who learn whatever skills and perform whatever tasks are required to reach them, and who strategically plan around or manipulate the actions of other people. If AGI is anywhere near as agentlike as humans in the sense of possessing the long-term goal-directedness I just described, that’s cause for significant concern.
A lifetime spent learning to become a 9-dan master at Go, perhaps? Building on the back of thousands of years of human knowledge and wisdom? Demolished in hours… I still look at the game and it looks incredibly abstract!
Don't get me wrong, I am really concerned; I just consider the danger much closer than others do, but also more soluble if we look at the right problem and ask the right questions.