Interesting ideas. I think there’s some conceptual overlap with my recent post on ‘biomimetic alignment’, which takes an evolutionary perspective on alignment issues. See here.
Thanks, I appreciate the comment. I hadn't seen your piece; it's great. The difficulty of gene/brain alignment is a good analogy for how unlikely human/AI alignment is on a first try, and I share your scepticism about humans having some general utility function.