For example, I can imagine a system constructed out of a huge number of ‘IF X THEN Y’ statements (reflexive responses), like ‘if body is in hallway, move North’, ‘if hands are by legs and body is in kitchen, raise hands to waist’…, equivalent to a kind of vector field of motions, such that for every particular state, there are directions that all the parts of you should be moving. I could imagine this being designed to fairly consistently cause O to happen within some context.
The vector field that you wrote about amounts to a state-transition system: you can characterize the “state” of the robot with a tuple, and then specify the available transitions from state to state. So (hallway, hands down) → (kitchen, hands up) is allowed, but (kitchen, hands up) → (hallway, hands up) is not. You can even specify a goal as a state, and then the robot can back-chain through its allowed transitions to decide how to go from (living room, hands down) to (docking station, hands down), for example.
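To make that concrete, here is a minimal Python sketch of the idea. The states, rooms, and transition table are all invented for illustration, and one hedge: back-chaining is usually described as searching backward from the goal, while this sketch searches forward over the same transition graph, which finds the same path in a small example.

```python
# A toy state-transition planner. States are (room, hands) tuples and
# ALLOWED lists the transitions the designer permits; all of it invented.
from collections import deque

ALLOWED = {
    ("hallway", "hands down"):     [("kitchen", "hands up")],
    ("kitchen", "hands up"):       [("kitchen", "hands down")],
    ("kitchen", "hands down"):     [("living room", "hands down")],
    ("living room", "hands down"): [("docking station", "hands down"),
                                    ("hallway", "hands down")],
    ("docking station", "hands down"): [],
}

def plan(start, goal):
    """Breadth-first search over the allowed transitions; returns a path."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in ALLOWED.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no sequence of allowed transitions reaches the goal

print(plan(("living room", "hands down"), ("docking station", "hands down")))
# -> [('living room', 'hands down'), ('docking station', 'hands down')]
```

Note that the robot’s whole world here is whatever the designer enumerated in the transition table.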
This kind of idealized conception of the robot’s state fails in the real world. What if there’s a person in the hallway, for example? People can step out of the way of a little robot vacuum moving along the floor, but an imposing robot with moving arms that does general-purpose tidying poses more danger (and invites more lawsuits).
There are:
automatons (do the same thing over and over)
remote-controlled robots (follow operator instructions, optionally relaying sensor data to the operator)
autonomous robots with sensors (respond to events or environments)
combinations of the above
A CNC machine is not autonomous. A military drone might be. An “out-of-the-office” message service is not autonomous. An AI office assistant might be.
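To make the first and third kinds concrete, here is a minimal sketch with invented stand-in sensor and actuator functions: the automaton replays a fixed sequence no matter what happens, while the autonomous controller branches on what it senses.

```python
import random

# Invented stand-ins for real hardware.
def obstacle_ahead() -> bool:
    return random.random() < 0.3  # pretend sensor reading

def act(command: str):
    print(command)                # pretend actuator

def automaton_step():
    # An automaton: the same fixed sequence every cycle, no sensing.
    for command in ("move forward", "turn left", "move forward"):
        act(command)

def autonomous_step():
    # An autonomous robot: what it does depends on what it senses.
    if obstacle_ahead():
        act("stop and reroute")
    else:
        act("move forward")

automaton_step()
autonomous_step()
```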
In general, designers have concerns like:
should the robot be reprogrammable?
will the robot have an operator? If so, will the operator need sensor data from the robot?
is the robot’s operating environment completely controlled?
There are analogous concerns for a software agent (for example, one that processes strings of social media posts); a rough sketch follows the list below.
will it respond differently to different inputs?
will it need to retain input information for later use?
do we know all the possible inputs that it might receive?
how do we decide what inputs get what outputs? Does that involve complex calculation or lots of background information?
does the agent perform other tasks in addition to producing outputs?
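Here is that sketch: a toy agent for social media posts, with every name and rule invented for the example. Each question above corresponds to a concrete spot in the code.

```python
class PostAgent:
    """A toy software agent; names and rules invented for illustration."""

    def __init__(self):
        # "Will it need to retain input information for later use?"
        self.history = []

    def respond(self, post: str) -> str:
        self.history.append(post)  # retained input

        # "Will it respond differently to different inputs?" and "how do
        # we decide what inputs get what outputs?" Here, a trivial keyword
        # rule; a real agent might need complex calculation or lots of
        # background information.
        if "refund" in post.lower():
            reply = "Routing you to billing."
        elif post.endswith("?"):
            reply = "Good question, let me check."
        else:
            # "Do we know all the possible inputs it might receive?"
            # Unanticipated inputs land in this catch-all branch.
            reply = "Thanks for your post."

        # "Does the agent perform other tasks?" For example, logging.
        print(f"[log] {len(self.history)} posts processed")
        return reply

agent = PostAgent()
print(agent.respond("Where is my refund?"))  # -> Routing you to billing.
```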
Robots and software agents offer something special when a task is redefined. A good example in robotics is home construction. There are flying drones carrying small printers that print features of a house with a cement-like goo. You couldn’t get a human flying a helicopter to do that, at least not very well. But a swarm of drones? Sure.
The drones are not general-purpose humanoid robots carrying out typical construction tasks, with arms gripping bricks and legs carrying the robot and its bricks from brick stacks to brick walls under construction. Alternatively, there could be construction-line robots building prefab parts that are shipped to the building site for home-owners to assemble themselves. Either way, though, you can’t have a brick house. What if you don’t like a home with cement-goo walls or prefab parts? Well, then costs go up. Or you can accept what task redefinition got you: weird walls or a prefab house.
Phone trees are an example of software agents that took away most of the intelligence required of phone operators. Usually, a human operator steps in when the situation requires handling not programmed into the phone tree; however, the human operator also has to follow a script for some parts of the transaction. Automating the operator’s work dehumanizes them a bit. The phone tree makes the customer work a bit more, and lets the support department pay a bit less for employees.
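As a sketch (menu labels and options invented for the example), a phone tree is little more than a nested lookup table, with a fallback to a human operator for anything it wasn’t programmed to handle:

```python
# A toy phone tree as a nested dict. Keypresses the tree wasn't
# programmed for fall through to a human operator.

TREE = {
    "prompt": "Press 1 for billing, 2 for support.",
    "1": {
        "prompt": "Press 1 for balance, 2 for payments.",
        "1": "balance desk",
        "2": "payments desk",
    },
    "2": "support desk",
}

def route(keys):
    """Follow a sequence of keypresses down the tree to a destination."""
    node = TREE
    for key in keys:
        if isinstance(node, str):               # already at a destination
            return node
        node = node.get(key, "human operator")  # unprogrammed key -> human
    return node if isinstance(node, str) else node["prompt"]

print(route(["1", "2"]))  # -> payments desk
print(route(["9"]))       # -> human operator
```

Everything above the fallback is the script; the fallback is where the human operator comes in.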
However, since such behavior would not be produced by a process optimizing O, you shouldn’t expect it to find new and strange routes to O, or to seek O reliably in novel circumstances.
Yes, but that means any unexpected change in the environment or context is a big deal: someone standing in the kitchen when the robot raises its arms, for example, or the robot being expected to sous-chef for the homeowner’s big meal.
There appears to be zero pressure for this thing to become more coherent, unless its design already involves reflexes to move its thoughts in certain ways that lead it to change itself.
There’s pressure on designers to make systems that can handle an uncertain environment. The pressure toward AGI is, in fact, pressure to replicate the work of humans at much higher rates and with much lower costs. It’s the same pressure that drives all other forms of automation. I would just call it greed, at this point, but that’s oversimplifying. A little.
Ironically, this push toward AGI is analogous to a factory owner wanting a humanoid robot on the line, doing the same jobs the same way as the people already there, when construction-line robots (and the task redefinition that goes with them) are already available. If you look at the current use of robots in factories, some task redefinition allowed robots with much less general intelligence to produce the same results as humans. It’s ironic to want a humanoid robot on the line that can work autonomously, receive training from Jill the line manager, and trade jokes with Joe, a fellow (human) employee, when cheap and reliable robot arms will do the welding, part assembly, or testing.
The list of means to automate (or cheapen or increase throughput of) work is growing:
simple robots
crowd-sourcing
out-sourcing
mechanical turks
expert systems
knowledge-bases
shifting work onto consumers
software agents
Task redefinition is part of it all. So why the emphasis on AGI when there are so many other ways to automate work, cheapen it, or increase production?
Seen broadly, the push toward AGI is an effort to cheapen human cognitive abilities and background knowledge so much that replacing humans with software or robots makes sense in all contexts. Not just in factories, but in white-collar jobs and service jobs of all kinds, at:
software design houses
art agencies
investment firms
government agencies
legal firms
maid services
engineering firms
construction companies
research organizations
communications companies
A lot of modern work is white-collar: data processing, creative work, or communications work. What protects it from automation by conventional means (task redefinition plus current technology) is:
entrenched interests (employees, managers)
low-cost alternatives to career-type employees (crowd-sourcing, flexible contracts, open-source, out-sourcing)
being part of profitable industries (less incentive to raise productivity or reduce costs)
ignorance (available tools are unknown)
comfort (comes with having $$$ and not wanting to threaten it)
time for development (automation tools take time to develop and mature with feedback)
cost barriers (automation is not free and there’s some risk of failure)
human interaction demand (in some roles, consumers like the emotional experience, common knowledge, or common sense of humans)
However, if you take a closer look at job stability and career longevity, you’ll see that the tech industry eats away at both with progressive automation. Cannibalizing its own work with automation is normal for it.
I expect you could build a system like this that reliably runs around and tidies your house say, or runs your social media presence, without it containing any impetus to become a more coherent agent (because it doesn’t have any reflexes that lead to pondering self-improvement in this way).
Well, that’s true: systems like the ones you described have no impetus to become a more coherent agent. It’s really when the agent has the resources to improve and the task demands are much greater that the impetus appears.
Does replacing human jobs really require devices with that impetus?
Is it desirable to people with money/power to continue to automate human jobs?
If the answers are both “yes”, then the impetus will have to be there to satisfy the organizations pushing for AGI solutions to their problems.