It seems epistemically dangerous to discourage such value enlightenment, as it might prevent us from becoming more enlightened ourselves.
It seems pretty adversarial to manipulate people into not becoming more value enlightened, and allowing this at the level of norms seems net negative from most people’s point of view.
But maybe people want to act more altruistically and trustingly in a society that also espouses those values. In that case, surface-level values could change in a way that’s good for almost everyone without any fundamental value drift. That’s also a useful phenomenon to study, so it’s probably fine to also call this ‘value drift’.
I’m a bit confused by why you made your first two points in response to my comment. Did you perceive me to be endorsing discouraging reflection on values, or endorsing “manipulating people” into not reflecting on their values and shifting their surface-level values in light of that?
I didn’t aim to endorse those things with my comment; merely to point out that it seems reasonable to me to call something a shift in values even if it’s not a shift in “fundamental values”.
(I also don’t think there’s a sharp distinction between fundamental values and surface-level values. But I think a fuzzy distinction like that can be useful, as can a distinction between moral advocacy focusing on encouraging reflection vs pushing people in one direction, and a distinction between central and peripheral routes for persuasion.
That said, I also think the word “manipulating” is probably not useful here; it’s very charged with connotations, so I’d prefer to talk about the actual behaviours in question, which may or may not actually be objectionable.)
Ok yeah, my explanations didn’t make the connection clear. I’ll elaborate.
I have the impression that “drift” carries the connotation of uncontrolled, and therefore undesirable, change. It has a negative connotation: people don’t want to value drift. If you call a rational surface-level value update “value drift”, it could confuse people and make them less prone to making those updates.
If you use ‘value drift’ only to refer to drift away from EA values, it also sneaks in an implication that other value changes are not “drifts”. Language shapes our thoughts, so this usage could modify one’s model of the world in such a way that one becomes more EA-aligned than one actually values.
I should have been more careful about implying certain intentions on your part in my previous comment, though. But I think some EAs do have this intention. And I think using the word that way has this consequence whether or not that’s the intent.