I had an idea for a new concept in alignment that might allow nuanced, human-like goals (if it can be fully developed).
Has anyone explored using neural clusters found by mechanistic interpretability as part of a goal system?
The idea is that you would look for clusters corresponding to certain concepts, e.g. happiness or autonomy, and include those neural clusters in the goal system. If the system kept learning over time, it could refine those concepts.
This was inspired by how human goals seem to contain concepts that themselves change over time.
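To make this a bit more concrete, here is a very rough sketch of the kind of thing I have in mind, assuming interpretability tools could hand you activation vectors for examples of a concept. The dimensionality, the mean-difference "direction" for a cluster, and the weighted scoring are all placeholders for illustration, not a claim about how this would actually be built.

```python
import numpy as np

# Rough illustration only: suppose interpretability work gives us activation
# vectors that fire on examples of a concept (e.g. "happiness"). Summarise the
# cluster as a direction in activation space, and score candidate states by
# how strongly they project onto it. Re-estimating the direction as new
# labelled examples arrive is the "refining the concept over time" part.

rng = np.random.default_rng(0)
DIM = 64  # hypothetical activation dimensionality

def concept_direction(concept_acts, baseline_acts):
    """Mean-difference direction separating concept activations from baseline."""
    direction = concept_acts.mean(axis=0) - baseline_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def goal_score(state_act, directions, weights):
    """Goal signal: weighted sum of projections onto each concept direction."""
    return sum(w * float(state_act @ d) for d, w in zip(directions, weights))

# Toy data standing in for activations gathered via interpretability probes.
happy_acts = rng.normal(0.5, 1.0, size=(100, DIM))
baseline_acts = rng.normal(0.0, 1.0, size=(100, DIM))
autonomy_acts = rng.normal(loc=np.linspace(-0.5, 0.5, DIM), scale=1.0, size=(100, DIM))

happiness_dir = concept_direction(happy_acts, baseline_acts)
autonomy_dir = concept_direction(autonomy_acts, baseline_acts)

# A candidate future state's activation, scored against the goal system.
candidate = rng.normal(0.3, 1.0, size=DIM)
score = goal_score(candidate, [happiness_dir, autonomy_dir], weights=[1.0, 0.5])
print(f"goal score for candidate state: {score:.3f}")

# "Refinement": fold newly labelled concept examples back in and recompute.
new_happy_acts = rng.normal(0.6, 1.0, size=(20, DIM))
happiness_dir = concept_direction(np.vstack([happy_acts, new_happy_acts]), baseline_acts)
```

The point of the sketch is just that the goal refers to learned concept representations rather than a fixed hand-written objective, so the goal can drift as the concept representation improves.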