According to the CLR, since resource acquisition is an instrumental goal (regardless of the AGI's utility function), such a goal could lead to a race in which each AGI can threaten others so that the target has an incentive to hand over resources or comply with the threatener's demands. Is such a conflict scenario (potentially leading to x-risks) between two AGIs possible if the two AGIs have different intelligence levels? If so, is there a size of intelligence gap beyond which x-risks become unlikely? How should we characterize this function (the probability of the threat being executed as a function of the intelligence gap between the two AGIs)? In other words, the question here is something like: how does the distribution of agent intelligence affect the threat dynamics? Has any work already been done on this?
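To make the question concrete, here is a minimal sketch of one hypothetical shape such a function could take: a logistic decay in which threat credibility is high between near-peers and falls off as the gap grows (on the assumption that a much stronger agent can neutralize the threat before it is executed). The parameters `k` and `g0` are purely illustrative assumptions, not anything taken from CLR's work; the question is precisely whether the real curve looks anything like this.

```python
import math

def p_execute(gap: float, k: float = 1.0, g0: float = 2.0) -> float:
    """Toy model: probability that a threat between two AGIs is executed,
    as a function of the intelligence gap between them.

    Hypothetical assumption: credibility decays logistically with the gap.
    k  -- steepness of the decay (illustrative free parameter)
    g0 -- gap at which the probability has dropped to 50% (illustrative)
    """
    return 1.0 / (1.0 + math.exp(k * (gap - g0)))

if __name__ == "__main__":
    # Print the toy curve at a few gap sizes to show its qualitative shape.
    for gap in [0.0, 1.0, 2.0, 4.0, 8.0]:
        print(f"gap={gap:>4}: P(execute) ~ {p_execute(gap):.3f}")
```

Whether the curve is monotonically decreasing, has a threshold, or behaves differently near parity is exactly what an answer to this question would pin down.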