You’re not the first person to notice this issue, and in my opinion they didn’t get a satisfying answer either. It seems like a holdover from the early days of AI theorizing, before we understood the power of machine learning and evolutionary algorithm techniques. I personally find it highly unlikely that we’ll end up with single-minded consequentialist goal-function maximisers: that seems like a difficult thing to produce with machine learning techniques, and one that would be unstable even if you could build it.