Thanks, interesting post. I wonder how this relates to AI and the ethical questions around AI values.
Yeah, that part I’m less sure about, especially since it’s in large part a subset of aligning AI to any goals in the first place. I plan to write a post soon on what makes different values “better” or “worse” than others; maybe we can set up a brainstorming session on that post? I think that one will be much more directly applicable to AI moral alignment.
Sure, I’m happy to do that.