Current AIs seem to use the vast, vast majority of their reasoning power for purposes that aren't directly related to their final applications. I predict this will also apply to the internal high-level reasoning of AIs. This doesn't seem true for humans.
In what sense do AIs use their reasoning power in this way? How does that affect whether they will have values that humans like?