I don’t see how using Intelligence (1) as a definition undermines the orthogonality thesis.
Intelligence (1): Intelligence as being able to perform most or all of the cognitive tasks that humans can perform. (See page 22.)
This only makes reference to abilities, not to the underlying motivation. Looking at high-functioning sociopaths, you might argue we have an example of agents that often perform very well at almost all human cognitive tasks but still hold attitudes towards other people that may be quite different from most people's, and lack many ordinary inhibitions.
This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tiling the universe with paperclips.
I don’t agree. I personally can easily imagine an agent that can argue convincingly for moral positions by analysing huge amounts of data about human preferences, that can use statistical techniques to infer the behaviour and attitudes of humans, and that can then use that knowledge to maximize something like positive affection or trust, among many other things.
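To make that decoupling concrete, here is a minimal, purely illustrative sketch (all names and data are hypothetical: toy ‘kindness’/‘honesty’ features and a made-up approval rule) of how one and the same learned model of human preferences can be plugged into opposite terminal goals:

```python
import random

random.seed(0)

# Synthetic stand-in for "huge amounts of data about human preferences":
# each record is (kindness, honesty) features of an action and whether
# a (simulated) human approved of it.
def sample_datum():
    k, h = random.random(), random.random()
    return (k, h), (k + h > 1.0)

DATA = [sample_datum() for _ in range(2000)]

# A crude learned model of human approval: estimate feature weights by
# comparing mean features of approved vs. disapproved actions. This is
# the agent's *capability* -- it models people equally well regardless
# of what it is ultimately trying to achieve.
def fit_approval_model(data):
    approved = [x for x, y in data if y]
    rejected = [x for x, y in data if not y]

    def mean(rows, i):
        return sum(r[i] for r in rows) / len(rows)

    w = tuple(mean(approved, i) - mean(rejected, i) for i in range(2))
    return lambda x: w[0] * x[0] + w[1] * x[1]  # higher = more approval

predicted_approval = fit_approval_model(DATA)

# Candidate actions: (kindness, honesty, paperclips_produced).
ACTIONS = [(random.random(), random.random(), random.randint(0, 100))
           for _ in range(50)]

# Two agents sharing the SAME model of humans but different terminal goals.
def best_action(utility):
    return max(ACTIONS, key=utility)

trust_maximizer = best_action(lambda a: predicted_approval((a[0], a[1])))
clip_maximizer = best_action(lambda a: a[2])  # ignores approval entirely

print("trust-maximizer picks:", trust_maximizer)
print("paperclip-maximizer picks:", clip_maximizer)
```

The capability (modelling human preferences) lives entirely in `fit_approval_model`; the motivation is just whichever utility function happens to be passed to `best_action`. That independence of the two components is the orthogonality point.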