What sort of goals might be common to human-level AIs emerging in different contexts for different purposes? Why wouldn't these AIs (in a scenario where they're developed slowly and in an embedded context) have just as much diversity in goals as humans do? Or is the argument that, at a very abstract level, they're going to end up wanting things more similar to what other AIs want than to what humans want?