Great piece, thank you.
Regarding “learning to reason from humans”, to what extent do you think having good models of human preferences is a prerequisite for powerful (and dangerous) general intelligence?
Of course, the motivation to act on human preferences is another matter—but I wonder if at least the capability comes by default?
My guess is that the capability is extremely likely, and the main difficulties are motivation and reliability of learning (since in other learning tasks we might be satisfied with lower reliability that gets better over time, but in learning human preferences unreliable learning could result in a lot more harm).
My own 2 cents: it depends a bit on what form of general intelligence is made first. There are at least two possible models:
1. Super-intelligent agent with a specified goal
2. External brain lobe
With the first, you need to be able to specify human preferences in the form of a goal, which enables the agent to pick the right actions.
The external brain lobe would start out not very powerful and without any explicit goals, but would be hooked into the human motivational system and develop goals shaped by human preferences.
HRAD (highly reliable agent design) is explicitly about the first. I would like both to be explored.
Right, I’m asking how useful or dangerous your (1) could be if it didn’t have very good models of human psychology—and therefore didn’t understand things like “humans don’t want to be killed”.