Feature request: integrate the content from the EA fora into LessWrong in a similar way to alignmentforum.org.
Risks & dangers: I think there is a non-negligible chance that the LW karma system is damaging the discussion and the community on LW in some subtle but important way.
Implementing the same system here makes the risks correlated.
I do not believe anyone among the development team or moderators really understands how such things influence people on the S1 level. It seems somewhat similar to likes on Facebook, and it's clear that Facebook likes are able to mess with people's motivation in important ways. So the general impression is that people are playing with something possibly powerful, likely without deep understanding, and possibly with a bad model of what the largest impacts are (a focus on the ordering of content vs. subtle impacts on motivation).
In situations with such uncertainty, I would prefer the risks to be less correlated.
Edit: another feature request: allow adding co-authors to posts. A lot of texts are created by multiple people, and it would be nice if all the normal functionality worked for them.
Great point. I think it’s really interesting to compare the blog comments on slatestarcodex.com to the reddit comments on /r/slatestarcodex. It’s a relatively good controlled experiment because both communities are attracted by Scott’s writing, and slatestarcodex has a decent amount of overlap with EA. However, the character of the two communities is pretty different IMO. A lot of people avoid the blog comments because “it takes forever to find the good content”. And if you read the blog comments, you can tell that they are written by people with a lot of time on their hands—especially in the open threads. The discussion is a lot more leisurely and people don’t seem nearly as motivated to grab the reader’s interest. The subreddit is a lot more political, maybe because reddit’s voting system facilitates mobbing.
Digital institution design is a very high leverage problem for civilization as a whole, and should probably receive EA attention on those grounds. But maybe it’s a bad idea to use the EA forum as a skunk works?
BTW there is more discussion of the subforums thing here.
Good observation with the SSC natural experiment! I actually believe LW2.0 is doing a pretty good job, and is likely better than reddit.
It's just that a lot of dilemmas are implicitly answered in some way, e.g.:
total utilitarian or average? total
decay with time or not? no decay
everything adds to one number? yes
show it or hide it? show it
scaling? logarithmic
This likely has some positive effects, and some negative ones. I will not go into speculation about what they are. But if EAF2.0 is going in this direction, I'd prefer the karma system to be sufficiently different from LW's. E.g. going average utilitarian and not displaying the karma would be different enough (just as an example!)
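To make those design axes concrete, here is a minimal sketch of a scorer parameterized by the dilemmas above. It is purely illustrative, not the actual LW2.0 algorithm; the vote fields (`strength`, `timestamp`) and the parameter names are assumptions.

```python
import math
import time

def comment_score(votes, mode="total", decay_half_life_days=None,
                  log_scale=True, now=None):
    """Score a comment from its votes.

    Purely illustrative; NOT the actual LW2.0 algorithm. Each vote is
    assumed to be a dict with 'strength' (signed weight) and 'timestamp'
    (unix seconds); both field names are assumptions.
    """
    now = time.time() if now is None else now
    weights = []
    for vote in votes:
        w = vote["strength"]
        if decay_half_life_days is not None:
            # "decay with time": halve a vote's weight every half-life
            age_days = (now - vote["timestamp"]) / 86400
            w *= 0.5 ** (age_days / decay_half_life_days)
        weights.append(w)
    if not weights:
        return 0.0
    score = sum(weights)                 # "total utilitarian"
    if mode == "average":                # "average utilitarian"
        score /= len(weights)
    if log_scale and score > 0:
        score = math.log2(1 + score)     # "scaling? logarithmic"
    return score
```

The LW-like point in this design space would be mode="total", no decay, log scaling, one displayed number; the "different enough" variant suggested above would flip mode to "average" and simply not display the output.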
Also, the academic literature on "social influence bias" (the paper by Lev Muchnik, Sinan Aral and Sean J. Taylor from 2013, and follow-ups) may be worth attention.
Yeah, maybe they could just select whatever karma tweaks would require the minimum code changes while still being relatively sane. Or ask the LW2.0 team what their second-choice karma implementation would look like and use it for the EA forum.
I really want to highlight the small point that you made in the end:
Digital institution design is a very high leverage problem for civilization as a whole, and should probably receive EA attention on those grounds.
I am personally very interested in this topic and there is a lot of depth to it. It would be awesome if this topic could gain more traction in the EA community, as it seems to be one of the most important challenges for the near-to-medium-term future. It may receive some conceptual attention in terms of AI alignment, and more practical consideration in terms of AI development coordination, but it is actually a much broader challenge than that, with implications for all areas of (digital) life. If I find the time, I will try to put a comprehensive post on this together. If you are also interested in this topic, please get in touch with me! (PM or alex{at}herwix.com)
My impression is that the subreddit comments can be longer, more detailed and higher quality than the blog comments. Maybe they are not better on average, but the outliers are far better and more numerous, and the karma sorting means the outliers are the ones that you see first.
Implementing the same system here makes the risks correlated.
The point re correlation of risks is an interesting one — I’ve been modelling the tight coupling of the codebases as a way of reducing overall project risk (from a technical/maintenance perspective), but of course this does mean that we correlate any risks that are a function of the way the codebase itself works.
I’m not sure we’ll do much about that in the immediate term because our first priority should be to keep changes to the parent codebase as minimal as possible while we’re migrating everything from the existing server. However, adapting the forum to the specific needs of the EA community is something we’re definitely thinking about, and your comment highlights that there are good reasons to think that such feature differences have the important additional property of de-correlating the risks.
Feature request: integrate the content from the EA fora into LessWrong in a similar way to alignmentforum.org.
That’s unfortunately not going to be possible in the same way. My understanding is that the Alignment Forum beta is essentially running on the same instance (server stack + database) as the LessWrong site, and some posts are just tagged as ‘Alignment Forum’ which makes them show up there. This means it’s easier to do things like have parallel karma scores, shared comments etc.
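As a purely illustrative sketch of that kind of single-instance setup (the field names here are hypothetical, not necessarily the real LessWrong schema): one shared post collection, a tag that controls whether a post also appears on the second site, and parallel karma fields living side by side.

```python
# Hypothetical single-instance setup: one shared post collection, with a
# flag controlling Alignment Forum visibility and parallel karma fields.
# Field names are illustrative, not confirmed LessWrong schema.
post = {
    "_id": "abc123",
    "title": "Some alignment post",
    "af": True,           # tagged 'Alignment Forum' -> also shows up there
    "baseScore": 42,      # LessWrong karma
    "afBaseScore": 17,    # parallel Alignment Forum karma
}

def visible_on_alignment_forum(p: dict) -> bool:
    """Posts are filtered by the tag rather than living in a second database."""
    return bool(p.get("af"))
```

A separate EA Forum instance would have none of this shared state, which is exactly why the same trick doesn't carry over.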
We see the EA Forum as a distinct entity from LW, and while we're planning to work very closely with the LW team on this project (especially during the setup phase), we'd prefer to run the EA Forum as a separate, independent project. This also gives us the affordance to do things differently in the future if desired (e.g. have a different karma system, different homepage layout, etc.).
Thanks for the info! I think running it as a separate project from LW is generally good, and prioritizing the move to the new system is right.
As for the LW integration: even if it is not technically possible to integrate in the Alignment Forum way, maybe there is some in-between option? (Although it's probably more a question to ask on the LW side.)
This forum is currently correlated with the EA subreddit with its conventional counting of votes, and if we went with a like system then it would be correlated with Facebook. I’m not sure what else you could do, aside from having no likes or votes at all, which would clearly be bad because it makes it very hard to find the best content.
I agree that it would be nice if the EA forum were implemented similarly to the way the Alignment Forum is being done, although since that is itself still in beta, maybe the timeline doesn't permit it right away. Maybe it's something that could happen later, though?
As to the risks of voting and the comparison to likes on Facebook, I guess the question is: is it any worse than any other system of voting/liking content? If it's distorting discussions, it seems unlikely that the change will be any worse than the existing voting system on this forum, since they are structurally similar even if the weighted voting mechanism is new.
It's a different question. The worry is this: two systems of voting/liking may be "equally good" in the sense that they e.g. incentivize 90% of good comments and disincentivize 10% of good comments, but the overlap of the good things they disincentivize may be just 1%. (This seems plausible given the differences in the mechanisms, the way scores are displayed, and how they direct attention.)
It makes a difference whether you are using two different, randomly broken systems or two copies of one.
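A toy simulation of that arithmetic, assuming the two systems' blind spots are statistically independent (the optimistic case):

```python
import random

random.seed(0)
N = 100_000      # good comments
P_MISS = 0.10    # each system disincentivizes 10% of good comments

# Two different, "randomly broken" systems: each misses its own 10%.
miss_a = {i for i in range(N) if random.random() < P_MISS}
miss_b = {i for i in range(N) if random.random() < P_MISS}
overlap_independent = len(miss_a & miss_b) / N   # ~0.10 * 0.10 = 1%

# Two copies of one system: they miss exactly the same comments.
overlap_copies = len(miss_a & miss_a) / N        # = 10%

print(f"independent systems overlap: {overlap_independent:.1%}")
print(f"identical systems overlap:   {overlap_copies:.1%}")
```

With independent systems only ~1% of good comments are suppressed everywhere; with two copies of one system, the full 10% is suppressed across the whole ecosystem.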