Thank you for this interesting overview of Vincent Müller’s arguments! I fully agree that implementation (policy means) often becomes the bottleneck. However, if we systematically reward behavior that contradicts our declared principles, then any “ethical goals” will remain vulnerable to being undermined at the implementation stage. In my own post, I call this the “bad parent” problem: we say one thing but demonstrate another. Do you think it’s possible to achieve robust adherence to ethical principles in AI when society itself remains fundamentally inconsistent?