Thanks for the comment! I feel funny saying this without being the author, but feel like the rest of my comment is a bit cold in tone, so thought it’s appropriate to add this :)
I lean more moral anti-realist, but I struggle to see how the concepts of “value alignment” and “decision-making quality” would be any less orthogonal on a moral realist view than on an anti-realist one.
Moral realist frame: “The more the institution is intending to do things according to the ‘true moral view’, the more it’s value-aligned.”
“The better the institution’s decision-making process is at predictably leading to what they value, the better its ‘decision-making quality’ is.”
I don’t see why these couldn’t be orthogonal in at least some cases. For example, a terrorist organization could be outstandingly good at producing outstandingly bad outcomes.
Still, it’s true that “value-aligned” might not be the best term, since some people seem to interpret it as a dog-whistle for “not following EA dogma enough” [link] (I don’t, although I might be mistaken). “Altruism” and “Effectiveness” as the x and y axes would suffer from the problem mentioned in the post: they could alienate people coming to work on IIDM from outside the EA community. For the y-axis, I’d ideally like terms that make it easy to differentiate between beliefs common in EA that are uncontroversial (“let’s value people’s lives the same regardless of where they live”) and beliefs that are more controversial (“x-risk is the key moral priority of our times”).
On the problem with “value-neutral”: I thought the post gave enough space to the belief that institutions might be worse than neutral on average, marking statements implying the opposite as uncertain. For example, crux (a) exists in this image to point out that if you disagree with it, you would come to a different conclusion about the effectiveness of (A).
(I’m testing out writing more comments on the EA forum, feel free to say if it was helpful or not! I want to learn to spend less time on these. This took about 30 minutes.)
Thanks for the post and for taking the time! My initial thoughts on trying to parse this are below; I hope they will further our mutual understanding.
You seem to make a distinction between intentions on the y-axis and outcomes on the x-axis. Interesting!
The terrorist example seems to imply that if you want bad outcomes, you are not value-aligned (aligned to what? to good outcomes?). The terrorists are value-aligned from their own perspective. And “terrorist” is itself not a value-neutral term: Nelson Mandela was once considered one, which I think would surprise most people now.
If we allow “from their own perspective”, then “effectiveness” would do for the y-axis (with “efficiency” replacing the x-axis); but it seems we don’t, and then “altruism” (or perhaps “good”, with less of an explicit tie to EA?) would work, without the ambiguity that “value-aligned” brings about whether or not we do allow “from their own perspective”.
(For someone who is not a moral realist, the option of appealing to “better values” is not available, so one seems stuck either with “from their own perspective” and calling the effective terrorist value-aligned, or with an explicit comparison to EA values, which I was supposing was not the purpose, and which seems even more off-putting via the alienating shortcoming in communication mentioned above.)
Besides “value-aligned” being suboptimal, which I have just supported further, you seem to accept “altruism” and “effectiveness” (I would now suggest “efficiency” instead) as accurate labels, but agree with the author that they have a shortcoming when communicating to certain audiences (alienation), with which I also agree. For other audiences, including myself, the current form has shortcomings of its own: I would value clarity more, and call the same thing by the same name. An intentionally opaque change of words might additionally come across as deceptive, and as aligned with one’s own ideas of the good but not with such ideas in a broader context. That, I think, could also count as (or become) a consequential shortcoming in communication strategy.
And regarding the non-orthogonality: as a moral realist, I was thinking more along these lines: being organized (etc.) is presumably a good value, and it would also improve your decision-making (considered more or less neutrally)...