I think for many people, positive comments would be much less meaningful if they were rewarded/quantified, because you would doubt that they’re genuine. (Especially if you’re prone to feeling like an impostor and easily seize on reasons to dismiss praise.)
I disagree with your recommendations despite agreeing that positive comments are undersupplied.
I’d quickly flag:
1. Any decent intervention should be done experimentally. It’s not like there would be “one system, hastily put together, in place forever.” More like, early work would try out some things and see what the response is like in practice. I imagine that many of the initial ideas would be mediocre, but with the right modifications in response to feedback, it’s possible to make something decent.
2. I think positive comments are often already rewarded, and that’s a major reason people give them. But I don’t think this is necessarily a bad thing. My quick guess is that this is a matter of adjusting incentives: certain incentive structures encourage certain classes of good and bad behaviors, so it’s important to keep tuning them. Right now we have some basic incentives that were arrived at by default and that are, in my opinion, quite unsophisticated (people are incentivized to be extra nice to people who are powerful and who will respond, and mean to people in the outgroup). I think semi-intentional work can improve this, but I realize it would need to be done well.
On my side it feels a bit like:
“We currently have an ecosystem of very mediocre incentives that produce the current results. It’s possible to set up infrastructure to adjust those incentives and experiment with what the results would be. I’m optimistic that this problem is both important enough and tractable enough for some good efforts to work on.”
I upvoted and didn’t disagree-vote, because I generally agree that using AI to nudge online discourse in more productive directions is good. But if I had to guess where the disagree votes come from, it might be a combination of:
1. It seems like we probably want politeness-satisficing rather than politeness-maximizing. (This could be consistent with some versions of the mechanism you describe, or a very slightly tweaked version; see the sketch after this list.)
2. There’s a fine line between politeness-moderating and moderating the substance of ideas that make people uncomfortable. Historically, it has been hard to police this line, and given the empirically observable political preferences of LLMs, it’s reasonable for people who don’t share those preferences to worry that this will disadvantage them (though I expect this bias issue to get better over time, possibly very soon).
3. There is a time and place for spirited moral discourse that is not “polite,” because the targets of the discourse are engaging in highly morally objectionable action, and it would be bad to always discourage people from engaging in such discourse.*
*This is a complicated topic on which I don’t claim to either (a) have fully coherent views, or (b) have always lived up to the views I do endorse.
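To make the satisficing-vs.-maximizing distinction in point 1 concrete, here is a minimal sketch. Everything in it is hypothetical: `score_politeness` is a toy stand-in for whatever LLM-based classifier a real system would use, and the threshold and names are illustrative only.

```python
# Hypothetical sketch of politeness-satisficing moderation.
# `score_politeness` is a toy stand-in for an LLM classifier; the
# bar and the keyword list are illustrative, not a real API.

POLITENESS_BAR = 0.4  # the satisficing bar: "polite enough", not "as polite as possible"

def score_politeness(comment: str) -> float:
    """Toy scorer (0 = hostile, 1 = very polite); a real system would call a model."""
    hostile_markers = ("idiot", "shut up", "stupid")
    hits = sum(marker in comment.lower() for marker in hostile_markers)
    return max(0.0, 1.0 - 0.5 * hits)

def moderate_satisficing(comment: str) -> str:
    """Intervene only when a comment falls below the bar; otherwise leave it alone."""
    return "flag_for_review" if score_politeness(comment) < POLITENESS_BAR else "allow"
```

A maximizing version would instead rank or reward every comment by its politeness score, nudging all discourse toward ever-greater politeness, which is the dynamic point 1 worries about.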
I find the disagree votes on this pretty interesting; I’m a bit curious to better understand the intuitions there.