I think this issue probably reflects the biggest cleavage between the rationalist community and the effective altruist community, to the extent the groups can really be separated. From a rationalist point of view, the truth is the most important thing, so virtue signaling is bad because it’s (suspected to be) dishonest. From an EA point of view, doing the most good is the most important thing, so socially-motivated virtue signaling is defensible if it consequentially results in more good.
Obviously this is an extreme oversimplification and I’m sure there are people in both communities who wouldn’t agree with the positions I’ve assigned them, but I would guess that as a general heuristic it is more accurate than not.
From an EA point of view, doing the most good is the most important thing, so socially-motivated virtue signaling is defensible if it consequentially results in more good.
EAs may be more likely to think this, but this is not what I’m saying. I’m saying there is real information value in signals of genuine virtue and we can’t afford to leave that information on the table. I think it’s prosocial to monitor your own virtue and offer proof of trustworthiness (and other specific virtues) to others, not because fake signals somehow add up to good social consequences, but because it helps people to be more virtuous.
Rationalists are erring so far in the direction of avoiding false or manipulative signals that they are operating in the dark, when at the same time they are advocating more and more opaque and uncertain ways to have impact. I think that by ignoring virtue and rejecting virtue signals, rationalists are not treating the truth as “the most important thing”. (In fact I think this whole orientation is a meta-virtue-signal that they don’t need validation and they don’t conform—which is a real virtue, but I think is getting in the way of more important info.) It’s contradicting our values of truth and evidence-seeking not to get what information we can about character, at least our own characters.
I just want to reiterate, I am not advocating doing something insincere for social benefit. I’m advocating getting and giving real data about character.
From a rationalist point of view, the truth is the most important thing, so virtue signaling is bad because it’s (suspected to be) dishonest
It’s a good way of framing it (if by “rationalist” you mean something like the average member of LW). I think the problem with this description is that we emphasize the need to be aware of one’s own biases so much that we picture ourselves as “lonely reasoners”—neglecting, e.g., the frequent need to communicate that one is something like a reliable cooperator.
Yeah. Another piece of this that I didn’t fully articulate before is that I think the “honesty” of virtue signaling is very often hard to pin down. I get why people have a visceral, negative reaction to virtue signaling when it’s cynically and transparently being used as a justification for, or a distraction from, doing things that are not virtuous at all, and it’s not hard to find examples of people doing this in practice. Even in that scenario, though, I think it’s a mistake to focus on the virtue signaling itself rather than the not-virtuous actions/intentions as the main problem. Like, if you have an agent with few or no moral boundaries who wants to do a selfish thing, why should we be surprised that they’re willing to be manipulative in the course of doing that?
I think cases like these are pretty exceptional though, as are cases where someone is using virtue signaling to express profound and stable convictions. I suspect it’s much more often the case that virtue signaling occupies a sort of ambiguous space where it might not be completely authentic but does at least partly reflect some authentic aspiration towards goodness, on the part of either the person doing it or the community they’re a part of. And I think that aspiration is really important on a community level, or at least in any community that I’d want to be a part of, and virtue signaling in practice plays an important role in keeping it alive.
Anyway, since “virtue” is in the eye of the beholder, it would be pretty easy to say that rationalists define “truth-seeking” as a virtue and that there’s a whole lot of virtue-signaling on LessWrong around that (see: epistemic status disclaimers, “I’m surprised to hear you say that,” “I’d be happy to accept a bet on this at x:y odds,” etc.)
Even in that scenario, though, I think it’s a mistake to focus on the virtue signaling itself rather than the not-virtuous actions/intentions as the main problem. Like, if you have an agent with few or no moral boundaries who wants to do a selfish thing, why should we be surprised that they’re willing to be manipulative in the course of doing that?
If you think of virtue signalling as a really important coordination mechanism, then abusing that system is additionally very bad on top of the object-level bad thing.
lol, see the version of this on less wrong to have your characterization of the rationalist community confirmed: https://www.lesswrong.com/posts/hpebyswwhiSA4u25A/virtue-signaling-is-sometimes-the-best-or-the-only-metric-we