Thank you for this deep and thought-provoking post! The concept of the “power-ethics gap” resonates strongly and seems critically important for understanding current and future challenges, especially in the context of AI.
The car analogy, where power is speed and ethics is the driver’s skill, is brilliant and illustrates the core of the problem very clearly. I would even venture that the “driver’s skill” today isn’t just lagging behind but may actually be degrading in some respects, given the growing complexity of the world, information noise, and polarization. Our collective ability to make wise decisions seems increasingly fragile, despite the growth of individual knowledge.
Your emphasis on shifting the focus in AI safety from the purely technical aspects of control (power) to deep ethical questions and “value selection” seems timely and necessary. This area seems to receive disproportionately little attention relative to its significance.
The concepts you’ve introduced, especially the distinction between Human-Centric and Sentientkind Alignment, as well as the idea of “Human Alignment,” are very interesting. The latter seems particularly provocative and important. Although you mention that this might fall outside the scope of traditional AI safety, don’t you think that without significant progress here, attempts to “align AI” might end up being built on very shaky ground? Can we really expect to create ethical AI if we, as a species, are struggling with our own “power-ethics gap”?
It would be interesting to hear more thoughts on how the concept of “Moral Alignment” relates to existing frameworks and whether it could help integrate these disparate but interconnected problems under one umbrella.
The post raises many important questions and introduces useful conceptual distinctions. Looking forward to hearing what other readers think! Thanks again for the food for thought!
Thank you so much for engaging with the post — I really appreciate your thoughtful comment.
You’re absolutely right: this is a deeply interconnected issue. Aligning humans with their own best values isn’t separate from the AI alignment agenda — it’s part of the same challenge. I see it as a complex socio-technical problem that spans both cultural evolution and technological design.
On one side, we face deeply ingrained psychological and societal dynamics — present bias, moral licensing, systemic incentives. On the other, we’re building AI systems that increasingly shape those very dynamics: they mediate what we see, amplify certain behaviors, and normalize patterns of interaction.
So I believe we need to work in parallel:
- On the AI side: ensuring systems are not naïvely trained on our contradictions, but instead scaffold better ethical reasoning.
- On the human side: addressing the root misalignments within ourselves through education, norm-shaping, institutional design, and narrative work.
I also resonate with your point about bringing these problems under one umbrella: we need a shared narrative that can unify these efforts and help us rise to the moment. It’s a huge challenge, and I don’t pretend to have all the answers, but I’ve been exploring directions and would love to share more concrete ideas with the community soon.