All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, creating and leading large formations, then they are probably going to have some kind of general intelligence like humans do. That means we can expect and demand that they have a decent moral compass.
But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy lethal autonomous weapons (LAWs) too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against government LAWs).
We don’t have civilian tanks or civilian fighter jets or lots of other things. Revolutions are almost always asymmetric.
All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, creating and leading large formations, then they are probably going to have some kind of general intelligence like humans do.
Couldn’t it be the case, though, that you have a number of machines that together fulfill all the organizational and tactical responsibilities of humans, without any one of them having general intelligence? Given that humans already function as cogs in a machine (a point you make very well from your experience), this seems very plausible.
In that case, each machine’s intelligence could be fairly narrow, and I would think we should not bet too much on the AIs having a moral compass.
If they are narrow in focus, then it might be easier to provide ethical guidance over their scope of operations.