How do you think we can align humans with their own best values? Is it more a matter of societal work outside the AI space, or is it also tied to the AI space?
I think the two are connected and we should work on both, and we really need a new story as a species if we are to rise to this challenge.
Thank you so much for engaging with the post — I really appreciate your thoughtful comment.
You’re absolutely right: this is a deeply interconnected issue. Aligning humans with their own best values isn’t separate from the AI alignment agenda — it’s part of the same challenge. I see it as a complex socio-technical problem that spans both cultural evolution and technological design.
On one side, we face deeply ingrained psychological and societal dynamics — present bias, moral licensing, systemic incentives. On the other, we’re building AI systems that increasingly shape those very dynamics: they mediate what we see, amplify certain behaviors, and normalize patterns of interaction.
So I believe we need to work in parallel:
- On the AI side, to ensure systems are not naïvely trained on our contradictions, but instead scaffold better ethical reasoning.
- On the human side, to address the root misalignments within ourselves through education, norm-shaping, institutional design, and narrative work.
I also resonate with your point about needing a new story — a shared narrative that can unify these efforts and help us rise to the moment. It’s a huge challenge, and I don’t pretend to have all the answers, but I’ve been exploring directions and would love to share more concrete ideas with the community soon.