All I can say is that if you are going to have machines that can fulfill all the organizational and tactical responsibilities of humans, creating and leading large formations, then they are probably going to have some kind of general intelligence, as humans do.
Couldn’t it be the case, though, that you have a number of machines that together fulfill all the organizational and tactical responsibilities of humans without having any one of them have general intelligence? Given that humans already function as cogs in a machine (a point you make very well from your experience), this seems very plausible.
In that case, the intelligence could be fairly narrow, and I would think we should not bet too much on the AIs having a moral compass.
If they are narrow in focus, then it might be easier to provide ethical guidance over their scope of operations.