EA Forum 2.0 Initial Announcement
At the request of Ryan Carey, who set up the EA Forum, CEA took over its running last year, as announced in this post. Here we give an update on what we plan to do with it. These are our initial thoughts—we’d welcome feedback on how they could be improved.
In summary, the technology of the current version of the Forum will reach end of life this calendar year. We plan to transfer to a new Forum at forum.effectivealtruism.org, using the codebase of the new LessWrong.com (for more detail on how we made this decision please see the ‘Technical Background’ section below). Content, user accounts and karma from the old Forum will be transferred, and old links redirected as needed.
The move will lead to some changes in how the Forum appears:
We will have a “community” subforum, which we are planning to reserve for organizational updates and discussion of community issues and resources (although we haven’t yet decided exactly where the line between the main Forum and this subforum will be drawn).
In contrast to the current system where posts are displayed in reverse chronological order, top-rated posts will appear higher on the page. We expect this will make it easier for readers to see the best-quality content quickly.
It will be possible to view content in ordered collections (‘sequences’).
Users will have profiles, where they can display information about their interests.
The Karma system and frontpage design will also be different.
We are also using this opportunity to improve moderation and content on the Forum. We will be working with some of the top forum posters and thinkers in the community to encourage them to post more frequently to the Forum, and we will be cross-posting high quality content from elsewhere. The amount of effort put into moderation will be increased, aiming to give detailed feedback on what content is useful.
In this post, we aim to explain why we think we should make these changes, set out what this will mean for Forum users, and seek your feedback on our plans.
Aims
Our vision is that the Forum becomes the main hub for content and discussion in the community. We’d like most of the thinking that the community does to go on the Forum, with all of EA’s top researchers posting and commenting regularly.
We think that this has several benefits for the community:
More content accessible: Many people in the core of the community have unpublished rough notes on topics, or interesting ideas that aren’t suitable for e.g. academic publication. They may not want the overhead associated with running their own blog, or may want more feedback than they would get on their own blog. If they know that posting to the Forum will reliably get them credit, and useful comments, they’re more likely to post there.
Easier to find content: The Forum’s search function should turn up most of the relevant thinking on a topic, whereas currently much is scattered on personal blogs. Similarly, people looking to get their daily fix of EA content should be able to find it on the Forum, rather than scattered on blogs.
More engagement: If more people are posting, more will be reading and commenting. We hope that this means that good posts get more engagement than they have in the past year or so of the Forum. Ideally, it becomes a place of lively debate between different perspectives within the community.
A single point of discussion: If more discussion is concentrated in one place, we hope it will be easier to keep the community synced up, and learning from each other.
To this end, our first goal is to ensure that the Forum is used by the top content producers in EA.
If we achieve this first goal, then we think that the Forum will be an excellent way to onboard newer people to the community, because: they will be exposed to high quality, fresh content; they will then have the opportunity to engage with that content and contribute themselves; and they will get high-quality feedback on what content of theirs is especially good, allowing them to become familiar with EA topics and community discussion norms.
How will the EA Forum change?
New features on initial release
Besides the updated design of the platform, in the initial release we expect users to benefit from the following features:
Sorting by popularity/engagement rather than only time order: The current Forum presents posts to users only in time order, with the most recent posts appearing at the top of the page. The new front page will use a slightly modified version of the Hacker News algorithm to order posts by number of upvotes and amount of engagement (an illustrative sketch follows this paragraph). This should ensure that popular posts won’t be crowded out simply because newer posts have been made. We expect this to improve the experience of reading the Forum, make it easier for users to find the best content, and reduce the cost of posting additional content to the Forum. Posts on topics of less general interest will be visible to those who seek them out, but will not appear on the front page unless they attract upvotes. We hope that this will encourage more people to post thoughts and works-in-progress to the Forum for feedback.
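To make the ordering concrete, here is a minimal sketch of the canonical Hacker News ranking formula. The Forum will use a “slightly modified” version of this algorithm, and the modifications are not specified here, so the constants, types, and function names below are illustrative assumptions rather than the Forum’s actual implementation.

```typescript
// Canonical Hacker News-style ranking (illustrative sketch only; the Forum's
// actual algorithm is an unspecified, modified variant of this).
interface Post {
  title: string;
  upvotes: number; // net upvotes; engagement signals could be folded in similarly
  postedAt: Date;
}

const GRAVITY = 1.8; // controls how quickly older posts sink down the page

function rankingScore(post: Post, now: Date): number {
  const ageHours = (now.getTime() - post.postedAt.getTime()) / (1000 * 60 * 60);
  return (post.upvotes - 1) / Math.pow(ageHours + 2, GRAVITY);
}

// Front page ordering: highest score first, so a well-upvoted post stays
// visible even after newer posts are published.
function orderFrontPage(posts: Post[]): Post[] {
  const now = new Date();
  return [...posts].sort((a, b) => rankingScore(b, now) - rankingScore(a, now));
}
```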
A community subforum: In addition to the main Forum, we will have a subforum which we are tentatively planning to call “Community”. We expect to use this category for discussion about issues in the EA community, organizational updates and meta discussion about the Forum itself.
Subscribing to users: It will be possible to subscribe to particular users’ contributions. Each user’s page also functions as a kind of personal blog, with a space to write about their interests and approach to problems. This should help users personalize their content feed over time to be better focused on areas they are interested in.
Sequences: A sequence is a set of linked articles on a particular topic. Any user will be able to create sequences, whether this be from their own original content, or as a way to collect their favorite pieces on a topic. Highly upvoted, user-created sequences will be discoverable by other users. We hope that this will help posters develop ideas in more depth, while keeping individual posts to a readable length, and that users will no longer need to split longer pieces into multiple numbered posts. It should also be a neat way to present canonical content to new users. You can see some examples of sequences on LW.
A reworked karma system: We will be aiming to transfer user accounts along with karma from the old to the new site in time for release. The karma system will also function slightly differently, mainly by giving a slightly higher weight to up- and down-votes from users who themselves have more karma. Users will also be able to choose between giving a small up/downvote (a normal click) and giving a larger boost (clicking and holding the up/downvote button).
The best summary of how this works is as follows (a rough code sketch of these thresholds appears after the list):
Normal votes (one click) will be worth:
3 points – if you have 25,000 karma or more
2 points – if you have 1,000 karma (currently no EA Forum user is above the 2-point level)
1 point – if you have 0 karma
Strong votes (click and hold) will be worth:
16 points (maximum) – if you have 500,000 karma
15 points – 250,000
14 points – 175,000
13 points – 100,000
12 points – 75,000
11 points – 50,000
10 points – 25,000
9 points – 10,000
8 points – 5,000 (Currently no EA Forum user is above the 8-point level)
7 points – 2,500
6 points – 1,000
5 points – 500
4 points – 250
3 points – 100
2 points – 10
1 point – 0
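As a quick illustration of the thresholds above, here is a minimal lookup sketch. The numbers are taken directly from the list; the function names are hypothetical and not taken from the LW2 codebase.

```typescript
// Vote-weight thresholds as listed above (illustrative sketch only).
function normalVoteWeight(karma: number): number {
  if (karma >= 25000) return 3;
  if (karma >= 1000) return 2;
  return 1;
}

function strongVoteWeight(karma: number): number {
  // [minimum karma, points] pairs, highest threshold first
  const thresholds: Array<[number, number]> = [
    [500000, 16], [250000, 15], [175000, 14], [100000, 13],
    [75000, 12], [50000, 11], [25000, 10], [10000, 9],
    [5000, 8], [2500, 7], [1000, 6], [500, 5],
    [250, 4], [100, 3], [10, 2],
  ];
  for (const [minKarma, points] of thresholds) {
    if (karma >= minKarma) return points;
  }
  return 1; // 0 karma and above
}
```

For example, a user with 5,000 karma (around the current Forum maximum) would cast normal votes worth 2 points and strong votes worth 8 points.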
We think that the EA community should aspire to particularly high standards of discussion, and that the best way to maintain those standards is to give those who have contributed to the community more ability to signal what content is good for the Forum than someone who has just created an account. However, we obviously don’t want this to become a tyranny of a few users. There are several users, holding very different viewpoints, who currently have high karma on the Forum, and we hope that this will help maintain a varied discussion, while still ensuring that the Forum has strong discussion standards.
The distinction between normal and strong votes seems useful because it helps to differentiate between posts and comments which are “good” and those that are excellent and should be widely read. You can read more at this post.
Support for Markdown and LaTeX: Will make it easier for people to format posts nicely, and include formulae where relevant.
Moving to forum.effectivealtruism.org: We plan to move the Forum to a new domain – forum.effectivealtruism.org – to avoid the confusing proliferation of domains. We’ll also try to bring some of the branding into alignment with effectivealtruism.org. We’ll make sure that we follow good practices for redirecting links etc. to ensure that Forum content is still easily searchable using Google.
Possible features after initial release
After we have successfully released the initial version of the Forum, we expect to continue development on some or all of the below features. These are just preliminary ideas, and we are going to be responding to feedback to determine how to best prioritize among them, or what other ideas we should put effort into.
Different landing pages for new users vs old hands: We are hoping that the new EA Forum will eventually become a good place to direct people relatively new to the community who would like to find out more. To this end we will be able to have the landing page for non-signed in users show a collection of core content, and then only once a user creates an account will they see more recent discussions. Hopefully this will encourage new users to get up to speed with the basics before engaging with new (possibly more advanced/weirder) content.
Additional subforums: (As requested.) In the initial release we will have one main forum with a “Community” subforum. It may be possible in the longer-term to add more subforums, though we would want to ensure that there is enough content in each potential new category to justify this. An alternative solution may be to implement a tag system. Our work on this will also be determined by what features the LessWrong team ends up implementing in this space.
Page for local groups and events: In the future we may want to add a page devoted to local groups and events, like LessWrong’s community page. We will determine whether adding this functionality makes sense after release.
Community-sourced ideas: We would like to hear from the users of the new Forum to help shape development priorities. We believe that the best ideas are often missed when making an initial project plan, but are likely to emerge from users once they are actually using a platform. As such we will be maintaining open communication with users in order to make sure we capitalize on high expected value improvements.
Moderation and curation
More active moderation: Recently, moderation on the Forum has mostly been about removing spam. We think that a better goal for moderation is to improve the quality of thinking and debate on the Forum. Therefore, we think that it might be useful to have a more active, positive form of moderation. In particular, we are considering investing moderator time in giving detailed feedback on posts and comments. We aim to draw up some more detailed plans for moderation before the release of the Forum, and we will post these plans to the current Forum for feedback, and to the new Forum for transparency. To facilitate this more time-intensive moderation, we plan to bring on Max Dalton as a fourth moderator (in addition to Howie Lempel, Julia Wise, and Denise Melchin) and we will consider paying moderators for the time they spend on this.
Creating more link posts: Currently we are hesitant about allowing lots of link posts on the Forum, because they displace original content. Because the new Forum will sort by engagement by default, we are less worried about this effect. Additionally, we hope that link posts can help to consolidate content, supporting our vision of the Forum as a central hub for content in the community.
Encouraging top thinkers to post more: We plan to engage with EA researchers and leaders and encourage them to post more frequently.
Curating and choosing sequences: We plan to curate sequences of posts on a variety of topics from existing Forum content. Users are also free to curate their own sequences, and vote on which sequences are best. Moderators can choose which sequences to promote (e.g. on the front page for non-logged-in users). We will consult with moderators and our advisory committee on this, and we also welcome ideas and feedback in the comments. One sequence that we think would be well suited to this role is CEA’s EA Handbook (note that this will soon have updated contents). We encourage you to create additional and alternative sequences.
Consulting with related forums: We will consult the moderators of LessWrong, the EA subreddit, and the main EA Facebook group about how we should relate to these different spaces. By default, we will keep the relationships as they were previously (for example in which types of content are cross-posted to which spaces), but there might be improvements that can be made.
Possible future plans: In the future, we may experiment with giving prizes for good content (along the lines of the AI alignment prize).
Technical Background
Since CEA took responsibility for the Forum, we have been thinking about how best to improve the experience for users and what should come next for the project. We were limited, however, in our ability to make improvements and changes to the current version, due to our team’s unfamiliarity with the codebase.
Additionally, it was becoming clear that the current version of the software was nearing the end of its lifespan, and Trike Apps (the Forum’s current hosting providers) confirmed that they would be forced to stop their hosting and maintenance services at the end of 2018.
As such, we have since been looking into the best options for building a new version of the service. We were considering four main approaches over the first half of 2018:
- Use an off-the-shelf discussion software platform, and customize it for the needs of our community;
- Spend time getting familiar with the codebase/stack that runs the existing Forum, patching/upgrading the software ourselves or porting the installation to a new server;
- Use the LessWrong 2.0 codebase, and customize that; or
- Build a solution from scratch, in-house.
After evaluating the above options we think that using the LessWrong 2.0 (LW2) codebase would be the best way forward.
We decided on this approach because we were not able to find an off-the-shelf solution as well targeted to our use-case as LW2. Their team has spent the past year thinking about how to create the best discussion platform for a community adjacent to ours, with very similar goals of enabling intellectual progress on important questions and helping create common knowledge in a (mostly) online community.
We evaluated the costs of simply sticking with the existing codebase. However, we were cautioned against this by Trike Apps, who believe that the setup is fairly brittle, and that it would be difficult to get running on a different server environment. Given that LW2 faced similar problems, and made the decision to develop a new, modern webapp, we believed it made sense to follow their lead.
In this light, building a solution from scratch also seemed a less attractive option, as we would likely have duplicated a lot of the LessWrong team’s work, and it was plausible that our version would have converged on similar solutions to issues we were hoping to solve.
There are some downsides to this approach. In particular, there is a risk in using software that is relatively new, and does not have either a) a wide base of open-source contributors or b) a large commercial software company backing it. However,
The LessWrong team has proven itself to be capable at delivering a product;
The code has been tested in production for nearly a year;
The team plans to continue full-time development and support on the product;
The software is fairly modular and is built on top of well-documented open-source components (Meteor, Vulcan, React, GraphQL); and
The team is located very close to CEA’s development team (in Berkeley), and plans to provide setup advice and ongoing support.
As such, we believe this is the right call, all things considered. CEA will have a formal support arrangement with them.
LW1 was hosted in a very similar setup (and in fact the impending End Of Life that will affect the EA Forum would have also affected the previous version of LessWrong). Consequently, the team has experience migrating data from a very similar system. This means that we can be more confident that the existing Forum data can be safely ported, that we will not need to maintain separate archives for older content, and that we will avoid broken links and other annoyances.
We plan to maintain the codebase as a fork of LW2, with the LW2 codebase as an upstream repo, and prefer submitting PRs upstream to making local changes. We’ll make minimal changes to the site’s theming to make it more similar to effectivealtruism.org, but otherwise attempt to run the codebase as closely as possible to LW2. This ensures that most of the maintenance stays with a development team that is familiar with the codebase, minimizes the risk that CEA will be unable to maintain the forum in the future, and means that any improvements to one codebase will be shared by the other.
Potential risks or downsides
Karma is more “elitist”: The new karma system gives greater weight to the votes of those with high karma. We think that this will overall improve the karma system: to maintain high discussion standards, it is useful to give top contributors a bigger say in determining what content is good than someone who has just created an account. Current high-karma users represent a variety of different worldviews.
SEO disadvantages: While we will make every effort to ensure we follow good practices for moving a website to a new domain, there’s some chance that the domain change could cause us to lose PageRank (though it might also increase due to being associated with effectivealtruism.org, which is highly ranked).
Too many newcomers: If the Forum does increase its rank in searches, or if more people link to the Forum, there might be an influx of newcomers, leading to an “Eternal September” effect.
We want to ensure that the Forum remains a place for more advanced discussion. Therefore, at least initially, we will not link to the Forum prominently. Once the standards of discussion are solid, we will gradually begin to link to the Forum more prominently: karma and moderation should ensure that the best content is highlighted and that lower-quality comments lose prominence. We will monitor how this changes discussion on the Forum. However, we think that exposing newcomers to high-quality content, allowing them to contribute, and giving them feedback on their contributions could be an excellent way for them to improve their understanding and become more involved in the community. We think that the benefits of this way of onboarding new community members will likely outweigh the costs.
Availability of older content: Existing Forum posts and comments will continue to be available. It will still be possible to comment on old posts.
Broken links: We will ensure that links to old Forum posts and comments redirect to the appropriate places on the updated site.
Sequence selection and moderation: CEA will have two staff as moderators of the Forum, which will give us more control over which posts and sequences are promoted (e.g. on the front page). Although CEA has a view on which causes we should prioritize, we recognize that the EA Forum is a community space that should reflect the community. We will moderate based on agreed standards, which will be focused on ensuring good discussion rather than any particular conclusions. We will publish our moderation and curation plans for feedback, and we encourage you to hold us to those public standards.
What comes next
We are already testing a closed beta among CEA staff, to give CEA’s development team time to iron out bugs and familiarize themselves with the codebase, and to give the moderation team time to experiment with the new moderation features.
In mid-August, we hope to run a semi-public beta with top Forum contributors, which we will gradually open to more users. Around this time, we aim to share our detailed moderation plans in order to seek feedback.
If all goes well, we hope to lock the current Forum (i.e. prevent new posts), and switch to the new Forum by late August/early September. Some time after that, all old content and karma will be transferred to the new domain.
Longer-term, besides the above-mentioned subforums and local groups sections, we are still considering a Single Sign On system with EffectiveAltruism.org (currently used by EA Funds and the Giving What We Can pledge), more design changes, and working with the LessWrong team to implement new features that both codebases can benefit from, or (if appropriate) building new features specific to the EA Forum.
Importantly, we want Forum users to have a say in how this develops. We’d welcome pointers to considerations we’ve missed, or feedback on any part of our plans.
Thanks,
Marek, Max, Sam, Julia & JP
Feature request: integrate the content from the EA fora into LessWrong in a similar way to alignmentforum.org.
Risks & dangers: I think there is a non-negligible chance the LW karma system is damaging the discussion and the community on LW in some subtle but important way.
Implementing the same system here makes the risks correlated.
I do not believe anyone among the development team or moderators really understands how such things influence people on the S1 level. It seems somewhat similar to likes on Facebook, and it’s clear that likes on Facebook are able to mess with people’s motivation in important ways. So the general impression is that people are playing with something possibly powerful, likely without deep understanding, and possibly with a bad model of what the largest impacts are (the focus is on the ordering of content, vs. subtle impacts on motivation).
In situations with such uncertainty, I would prefer the risks to be less correlated
Edit: another feature request: allow adding co-authors to posts. A lot of texts are created by multiple people, and it would be nice if all the normal functionality worked.
Great point. I think it’s really interesting to compare the blog comments on slatestarcodex.com to the reddit comments on /r/slatestarcodex. It’s a relatively good controlled experiment because both communities are attracted by Scott’s writing, and slatestarcodex has a decent amount of overlap with EA. However, the character of the two communities is pretty different IMO. A lot of people avoid the blog comments because “it takes forever to find the good content”. And if you read the blog comments, you can tell that they are written by people with a lot of time on their hands—especially in the open threads. The discussion is a lot more leisurely and people don’t seem nearly as motivated to grab the reader’s interest. The subreddit is a lot more political, maybe because reddit’s voting system facilitates mobbing.
Digital institution design is a very high leverage problem for civilization as a whole, and should probably receive EA attention on those grounds. But maybe it’s a bad idea to use the EA forum as a skunk works?
BTW there is more discussion of the subforums thing here.
Good observation with the SSC natural experiment!
I actually believe LW2.0 is doing a pretty good job, and is likely better than reddit.
It’s just that there are a lot of dilemmas implicitly answered in some way, e.g.:
total utilitarian or average? total
decay with time or not? no decay
everything adds to one number? yes
show it or hide it? show it
scaling? logarithmic
This likely has some positive effects, and some negative ones. I will not go into speculation about what they are. It’s just that if EAF 2.0 is going in this direction, I’d prefer the karma system to be sufficiently different from LW’s. E.g. going average utilitarian and not displaying the karma would be different enough (just as an example!)
Also the academic literature on “social influence bias” (paper by Lev Muchnik, Sinan Aral and Sean J. Taylor from 2014 and followups) may be worth attention
Yeah maybe they could just select whatever karma tweaks would require the minimum code changes while still being relatively sane. Or ask the LW2.0 team what their second choice karma implementation would look like and use it for the EA forum.
I really want to highlight the small point that you made at the end:
I am personally very interested in this topic and there is a lot of depth to it. It would be awesome if this topic could gain more traction in the EA community as it seems to be one of the most important challenges for the near-to-medium term future. It may receive some conceptual attention in terms of AI alignment and more practical considerations in terms of AI development coordination, but it is actually a much broader challenge than that, with implications for all areas of (digital) life. If I find the time, I will try to put a comprehensive post on this together. Whoever is also interested in this topic, please get in touch with me! (PM or alex{at}herwix.com)
My impression is that the subreddit comments can be longer, more detailed and higher quality than the blog comments. Maybe they are not better on average, but the outliers are far better and more numerous, and the karma sorting means the outliers are the ones that you see first.
The point re correlation of risks is an interesting one — I’ve been modelling the tight coupling of the codebases as a way of reducing overall project risk (from a technical/maintenance perspective), but of course this does mean that we correlate any risks that are a function of the way the codebase itself works.
I’m not sure we’ll do much about that in the immediate term because our first priority should be to keep changes to the parent codebase as minimal as possible while we’re migrating everything from the existing server. However, adapting the forum to the specific needs of the EA community is something we’re definitely thinking about, and your comment highlights that there are good reasons to think that such feature differences have the important additional property of de-correlating the risks.
That’s unfortunately not going to be possible in the same way. My understanding is that the Alignment Forum beta is essentially running on the same instance (server stack + database) as the LessWrong site, and some posts are just tagged as ‘Alignment Forum’ which makes them show up there. This means it’s easier to do things like have parallel karma scores, shared comments etc.
We see the EA Forum as a distinct entity from LW, and while we’re planning to work very closely with the LW team on this project (especially during the setup phase), we’d prefer to run the EA Forum as a separate, independent project. This also gives us the affordance to do things differently in the future if desired (e.g. have a different karma system, different homepage layout etc).
Thanks for the info!
I think running it as a separate project from LW is generally good, and prioritizing the move to the new system is right.
With the LW integration, OK, even if it is not technically possible to integrate in the Alignment Forum way, maybe there is some in-between way? (Although it’s probably more a question to ask on the LW side.)
This forum is currently correlated with the EA subreddit with its conventional counting of votes, and if we went with a like system then it would be correlated with Facebook. I’m not sure what else you could do, aside from having no likes or votes at all, which would clearly be bad because it makes it very hard to find the best content.
I agree that it would be nice if the EA forum was implemented similar to the way Alignment Forum is being done, although since that is itself still in beta maybe the timeline doesn’t permit it right away. Maybe it’s something that could happen later, though?
As to risks with voting and comparison to likes on Facebook, I guess the question would be: is it any worse than any system of voting/liking content? If it’s distorting discussions, it seems unlikely that the change will be any worse than the existing voting system on this forum, since they are structurally similar even if the weighted voting mechanism is new.
It’s a different question.
The worry is this: Two systems of voting/liking may be “equally good” in the sense that they e.g. incentivize 90% of good comments and disincentivize 10% of good comments, but the overlap of good things they disincentivize may be just 1%. (This seems plausible given the differences in the mechanism, the way it is displayed, and how it directs attention.)
It makes a difference whether you are using two different randomly broken systems, or two copies of one.
Are there any plans to evaluate the current karma system? Both the OP and multiple comments expressed worries about the announced scoring system, and in the present day we regularly see people complain about voting behaviour. It would be worth knowing if the concerns from a year ago turn out to have been correct.
Related to this, I have a feature request. Would it be possible to break down scores in a more transparent way, for example by number of upvotes and downvotes? The current system gives very little insight to authors about how much people like their posts and comments. The lesson to learn from getting both many upvotes and many downvotes is very different from the lesson to learn if nobody bothered to read and vote on your content.
Are there particular instances of complaints related to voting behavior that you can recall?
I remember seeing a couple of cases over the last ~8 months where users were concerned about low-information downvotes (people downvoting without explaining what they didn’t like). I don’t remember seeing any instances of concern around other aspects of the current system (for example, complaints about high-karma users dominating the perception of posts by strong-voting too frequently). However, I could easily be forgetting or missing comments along those lines.
Currently, you can see the number of votes a post or comment has received by hovering over its karma count. This does let you distinguish between “many upvotes and many downvotes” and “no votes”. Adding a count of upvotes and downvotes would provide more information about the distribution of strong votes (e.g. one strong upvote vs. several weak downvotes, or vice-versa). I can see how that could be useful, and I’ll bring it up with the Forum’s tech team to hear their thoughts. Thank you for the suggestion!
It’s not an instance of a complaint, but take it as a datapoint: I’ve switched off the karma display on all comments and my experience improved. The karma system tends to mess with my S1 processing.
It seems plausible karma is causing harm in some hard-to-perceive ways. (One specific way is people updating on karma patterns, mistaking them for some voice of the community / EA movement / …)
Can you elaborate on how you turned off karma display? I would love to use your code if you’re willing to share it. I strongly dislike posting on the EA Forum because of how the karma system works, and my experience would be vastly improved if I couldn’t see post/comment karma.
>> I’ve switched off the karma display on all comments and my experience improved. The karma system tends to mess with my S1 processing.
Fully understand if you don’t want to, but I’m curious if you could elaborate on this. I’m not entirely sure what you mean.
As humans, we are quite sensitive to signs of social approval and disapproval, and we have some ‘elephant in the brain’ motivation to seek social approval. This can sometimes mess with epistemics.
The karma represents something like the sentiment of people voting on a particular comment, weighted in a particular way. For me, this often did not seem to be a signal adding any new information: when following the forum closely, I would usually have been able to predict what would get downvoted or upvoted.
What seemed problematic to me was, first, the number of times I felt hesitant to write something because part of my S1 predicted it would get downvoted. Also, I did not want to be primed by karma when reading others’ comments.
On a community level, overall I think the quality of the karma signal is roughly comparable to Facebook likes. If people are making important decisions, evaluating projects, assigning prizes… based on it, it seems plausible it’s actively harmful.
I don’t have any specific instances in mind.
Regarding your accounting of cases, that was roughly my recollection as well. But while the posts might not address the second concern directly, I don’t think that the two concerns are separable. The actual mechanisms and results might largely overlap.
Regarding the second concern you mention specifically, I would not expect those complaints to be written down by any users. Most people on any forum are lurkers, or at the very least they will lurk a bit to get a feel for what the community is like and what it values before participating. This makes people with oft-downvoted opinions self-select out of the community before ever letting us know that this is happening.
The hovering is helpful, thank you.
This seems like a strong plan, and I’m glad you’ve thought things through thoroughly. I’ll just outline points of agreement, and slight differences.
I certainly agree with the approach of building EA Forum 2 from LessWrong 2. After all, the current EA Forum version was built from LessWrong 1 for similar reasons. We had a designer sketch the restyled site, and this was quite a positive experience, so I’d recommend doing the same with the successor. Basically, the EA Forum turned out quite a bit more beautiful than LessWrong, and the same should be possible again. I think there are some easy wins to be had here, like making LW2’s front-page text a bit darker, but I also think it’s possible to go beyond that and make things really pretty all-around.
I agree with keeping LW2’s new karma system, and method of ordering posts, and I think that this is a major perk of the codebase. I’m also happy to see that you seem to have the downsides well-covered.
One small diff is when you say “Although CEA has a view on which causes we should prioritize, we recognize that the EA Forum is a community space that should reflect the community.” Personally, I think that forum administrators should be able to shape the content of the forum a little bit. Not by carrying out biased moderation, but by various measures that are considered “fair” like producing content, promoting content, voting, and so on.
I think the possible features are kind-of interesting. My thoughts are as follows:
different landing pages: may be good
local group pages: may be good, but maybe events are best left on Facebook. It would be amazing if you could automatically include Facebook events, but I’ve no idea whether that’s feasible.
additional subforums: probably bad, because I think the community is currently only large enough to support ~2 active fora, and having multiple fora adds confusion and reduces ease-of-use.
Single sign-on: likely to be good, since things are being consolidated to one domain.
Thanks again to Trike Apps for running the forum over all these years, and thanks to CEA for taking over. With my limited time, it would never have been possible to transition the forum over to new software, and so we would have been in a much worse position. So thanks all!
Hi Ryan, Thanks again for setting up the Forum, and for looking after it!
On some of the points you raise:
I agree that moderators should be able to produce content, and vote: we were not proposing that CEA staff or moderators would not do that.
I like the idea of integrating with Facebook events, I’ll add it to our list.
I also agree that the community is not currently large enough for many additional fora: if we implement this, it will be slowly and carefully.
I’ve not yet read it myself, but I’m curious if anyone involved in this project has read “Building Successful Online Communities: Evidence-Based Social Design” (https://mitpress.mit.edu/books/building-successful-online-communities). Seems quite relevant.
I actually have made detailed notes on the first 65% of the book, and hope to write up some summaries of the chapters.
It’s a great work. To do the relevant literature reviews would likely have taken me 100s of hours, rather than the 10s to study the book. As with all social science, the conclusions from most of the individual studies are suspect, but I think it sets out some great and concrete models to start from and test against other data we have.
Added: I’m Ben Pace, from LessWrong.
Added2: I finished the book. Not sure when my priorities will allow me to turn it into blogposts, alas.
That’s great to hear!
Oh thanks for sharing this!
I’m concerned with the plans to make voting/karma more significant; I would prefer to make them less significant than the status quo rather than more. Voting allows everyone’s biases to influence discussion in bad ways. For example, people’s votes tend to favor:
things they agree with over things they disagree with, which makes it harder to voice dissenting opinions
entertaining content over important but less-entertaining content
agreeable content without much substance over niche or disagreeable content with lots of substance
posts that raise easy questions and give strong answers over posts that raise hard questions and give weak answers
Sorting the front page by votes, and giving high-karma users more voting power, only does more to incentivize bad habits. I think the current voting system is more suited to something like reddit which is meant for entertainment, so it’s reasonable for the most popular posts to appear first. If the idea is to have “all of EA’s top researchers posting and commenting regularly”, I don’t think votes should be such a strong driver of the UX.
About a year ago I essentially stopped making top-level posts on the EA Forum because the voting system bothers me too much, and the proposed change sounds even worse. Maybe I’m an outlier, but I’d prefer a system that more closely resembled a traditional forum without voting where all posts have equal status. That’s probably not optimal and it has its own problems (the most obvious being that low-quality content doesn’t get filtered out), but I’d prefer it to the current or proposed system.
I just commented on SamDeere’s comment above about having multiple types of votes: one indicating agreement and one indicating “helpfulness”. Then you can sort by both, but the forum is sorted by default by “helpfulness”. Do you think this would fix some of your issues with a voting system?
Arbital uses a system where you can separately “upvote” things based on how much you like them, and give an estimate of how much probability you assign to claims. I like this system, and have recommended it be added to LW too. Among other things, I think it has a positive effect on people’s mindsets if they practice keeping separate mental accounts of those two quantities.
I think there’s another downside there: we should be wary of implementing a system that doesn’t have a track record. There are lots of forums that don’t have voting, and reddit-style voting has a long track record as well (plus Hacker News-style, which is similar but not quite the same as reddit-style). As you start introducing extra complexity, you don’t know what’s going to happen. Most possible designs are bad, and most designs we come up with a priori will probably be bad, so my inclination would be to stick close to a system that has a proven track record.
That said, having multiple types of upvotes could look something like Facebook which now has multiple types of likes, and we have at least some idea of what that would look like. So it might be a good idea.
I agree with this concern.
Even with some weighting for ‘long-timers’, 16x seems excessive.
The concern seems exacerbated by the idea of more active moderation.
I’m not convinced that a forum having diverse viewpoints already represented suffices to counteract this.
The distinction between moderating based on content and procedure (‘good discussion’) might be hard to uphold: disagreement on what constitutes a good argument is also important, for example.
The concern seems also exacerbated by a worry (which I tried to articulate elsewhere) of people established within the community possibly giving too much epistemic weight to someone being thus embedded.
I think the proposed karma system, particularly when combined with the highly rated posts being listed higher, is a quite bad idea. In general, if you are trying to ensure quality of posts and comments while spreading the forum out more broadly there are hard tradeoffs with different strengths and weaknesses. Indeed, I might prefer some type of karma weighting system to overly strict moderation but even then the weights proposed here don’t seem justifiable.
What problem is being solved by giving up to 16 times maximum weight that would not be solved with giving users with high karma “merely” a maximum of 2 times the amount of possible weight? 4 times?
While it may be true now that there are multiple users with high karma with very different viewpoints, any imbalance among competing viewpoints at the start of a weighted system could possibly feedback on itself. That is to say, if viewpoint X has 50% of the top posters (by weight in the new system), Y has 30%, and Z 20%, viewpoint Z could easily see their viewpoint shrink relative to the others because the differential voting will compound itself over time.
Thanks for the comments on this Marcus (+ Kyle and others elsewhere).
I certainly appreciate the concern, but I think it’s worth noting that any feedback effects are likely to be minor.
As Larks notes elsewhere, the scoring is quasi-logarithmic — to gain one extra point of voting power (i.e. to have your vote be able to count against that of a single extra brand-new user) is exponentially harder each time.
Assuming that it’s twice as hard to get from one ‘level’ to the next (meaning that each ‘level’ has half the number of users of the preceding one), the average ‘voting power’ across the whole of the forum is only 2 votes. Even if you make the assumption that people at the top of the distribution are proportionally more active on the forum (i.e. a person with 500,000 karma is 16 times as active as a new user), the average voting power is still only ≈3 votes.
Given a random distribution of viewpoints, this means that it would take the forum’s current highest-karma users (≈5,000 karma) 30-50 times as much engagement in the forum to get from their current position to the maximum level. Given that those current karma levels have been accrued over a period of several years, this would entail an extreme step-change in the way people use the forum.
(Obviously this toy model makes some simplifying assumptions, but these shouldn’t change the underlying point, which is that logarithmic growth is slooooooow, and that the difference between a logarithmically-weighted system and the counterfactual 1-point system is minor.)
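For those who want to check the arithmetic, here is a minimal sketch that reproduces the toy-model numbers above. It assumes 16 karma ‘levels’ corresponding to strong-vote weights 1-16, with each level containing half as many users as the one below it; these are the simplifying assumptions stated in this comment, not measured data.

```typescript
// Toy model: average voting power under a geometric distribution of karma levels.
const weights = Array.from({ length: 16 }, (_, i) => i + 1); // strong-vote weights 1..16
const userShare = weights.map((w) => Math.pow(0.5, w));      // each level has half as many users

const totalUsers = userShare.reduce((a, b) => a + b, 0);
const avgPower =
  weights.reduce((sum, w, i) => sum + w * userShare[i], 0) / totalUsers;

// Now assume activity is proportional to voting power (a 16-point user is 16x as active):
const totalActivity = weights.reduce((sum, w, i) => sum + w * userShare[i], 0);
const avgPowerActivityWeighted =
  weights.reduce((sum, w, i) => sum + w * w * userShare[i], 0) / totalActivity;

console.log(avgPower.toFixed(2));                 // ≈ 2.00
console.log(avgPowerActivityWeighted.toFixed(2)); // ≈ 3.00
```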
This means that the extra voting power is a fairly light thumb on the scale. It means that community members who have earned a reputation for consistently providing thoughtful, interesting content can have a slightly greater chance of influencing the ordering of top posts. But the effect is going to be swamped if only a few newer users disagree with that perspective.
The emphasis on can in the preceding sentence is because people shouldn’t be using strong upvotes as their default voting mechanism — the normal-upvote variance will be even lower. However, if we thought this system was truly open to abuse, a very simple way we could mitigate this is to limit the number of strong upvotes you can make in a given period of time.
There’s an intersection here with the community norms we uphold. The EA Forum isn’t supposed to be a place where you unreflectively pursue your viewpoint, or about ‘winning’ a debate; it’s a place to learn, coordinate, exchange ideas, and change your mind about things. To that end, we should be clear that upvotes aren’t meant to signal simple agreement with a viewpoint. I’d expect people to upvote things they disagree with but which are thoughtful and interesting etc. I don’t think for a second that there won’t be some bias towards just upvoting people who agree with you, but I’m hoping that as a community we can ensure that other things will be more influential, like thoughtfulness, usefulness, reasonableness etc.
Finally, I’d also say that the karma system is just one part of the way that posts are made visible. If a particular minority view is underrepresented, but someone writes a thoughtful post in favour of that view, then the moderation team can always promote it to the front page. Whether this seems good to you obviously depends on your faith in the moderation team, but again, given that our community is built on notions like viewpoint diversity and epistemic humility, then the mods should be upholding these norms too.
Speculative feature request: side votes
The problem now is that some people use upvotes to indicate agreement, while others use them to indicate helpfulness (and many, I suspect, use them interchangeably). Having two types of votes clearly separates these two signals. A vote to the right would mean agree, a vote to the left would be disagree. It doesn’t necessarily need to be a sidevote, another symbol might be better, but it’s the idea of two types of votes that counts.
Downside: people are unfamiliar with it, and may be complex to implement. Further complicates the dynamics of upvotes that other people have mentioned in this comment section. However, I think it’s fairly straightforward and people will easily pick up on it. Because it won’t be confused with other systems (I don’t know other fora with multiple types of votes), people will easily read the mouse-over text to find out what the votes mean.
Thanks for the input Sam! How about the visibility of down-voted posts? If I recall right, currently if a post has −2 votes, by default it won’t be visible. Now if the first reader of the post is an older member with a lot of voting power, does this mean that they can single-handedly make a new post invisible?
My suggestion here would be to remove the default criterion for which posts are visible, so that by default all posts are visible (irrespective of the downvotes), but that people can select in their settings a threshold of votes a post should have in order to be visible.
Our proposal for how this would work is that all posts would be visible on personal blogs, but that posts with a negative karma score wouldn’t show up on the “frontpage” (the default view). People would still be able to see it on the “All posts” view until the post reached −5 karma, and would be able to upvote it back onto the frontpage. Sometimes this might lead to us losing quality posts, but it also helps prevent users seeing very low quality posts (e.g. spam).
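A minimal sketch of the visibility rules described in the reply above. The thresholds (negative karma hidden from the frontpage, −5 for the “All posts” view) come from the comment; the view names and the function are hypothetical.

```typescript
// Sketch of post visibility per view, using the thresholds stated above.
type View = "frontpage" | "allPosts" | "personalBlog";

function isVisible(view: View, karma: number): boolean {
  switch (view) {
    case "personalBlog":
      return true;        // all posts remain visible on the author's personal blog
    case "allPosts":
      return karma > -5;  // hidden from "All posts" once karma reaches -5
    case "frontpage":
      return karma >= 0;  // posts with negative karma drop off the frontpage
  }
}
```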
Speculative feature request: anonymous commenting and private commenting
Sometimes people might want to comment anonymously because they want to say something that could hurt their reputation or relationships, or affect the response to the criticism in an undesirable way. For example, OpenPhil staff criticising a CEA or 80K post would involve awkward dynamics because OpenPhil partly funds these organizations. Having an option to comment anonymously (but letting the default be with names) will allow more free speech.
Relatedly, some comments could be marked as “only readable by the author”, because it’s a remark about sensitive information. For example, feedback on someone’s writing style or a warning about information hazards when the warning itself is also an information hazard. A risk of this feature is that it will be overused, which reduces how much information is spread to all the readers.
Meta: not sure if this thread is the best for these feature requests, but I don’t know where else :)
Forgive me if I’m being slow, but wouldn’t private messages (already in the LW2 codebase) accomplish this?
Yes you’re right and I had not thought of it. I still think private commenting has a slight benefit because it lowers the barrier (I frequently comment on posts, but wouldn’t send someone a private message). However, I don’t think the benefit is big enough to put effort into.
I agree that it’s sometimes useful for people to be able to post anonymously. Currently this is done by people creating separate anonymous accounts, which seems like a reasonable work-around. (And +1 to Greg’s comment about your second use case.)
How anonymous would you want this to be? Like would the mods still know who posted it?
(I’m not the OP.) How about nobody knows who submits it but the comment only appears if the mods approve it? And to deter rule-violating comments, maybe only people with a certain level of karma should be allowed to submit anonymous comments and they should lose a certain amount of karma if the comment is rejected (though we’d have to figure out how to hide that loss from the moderators)?
I would first opt for the low-effort approach: allow anonymous commenting and only have mods step in when it’s reported. If this doesn’t work, then you could have all anonymous comments moderated.
I think having a certain level of karma (not high) is a good addition.
I think the mods might know, as long as we won’t have too many mods. I have no strong opinion either way.
Thank you Marek and the whole CEA team for taking on this project! I love your initiative and what you outline seems like a very valuable and necessary step for the EA community. If things work out as you imagine, EA could be one of the first science-driven communities with a strong “community-reviewed” journal type offering (in this vein it may make sense to introduce different types of “publications” – idea, project report, scientific publication, etc. – with different standards for review and moderation). Very inspiring!
A question that comes to my mind would be your plans and stance on making user profiles/data accessible to external partners and integrations. For example, I am investing some time into thinking about the funding pipeline in EA right now, in particular with a focus on small-scale projects which seem to be falling through the cracks right now. Having a funding platform integrate with the community system and trust measures of the EA Forum could be a game changer for this (for people interested in this topic, get in touch on the Rethink Slack #ti-funding or https://gitlab.com/effective-altruism/funding-pipeline – not much is written down right now, but there are already some people interested in this space). Given that the LessWrong 2.0 codebase is open source, it should be possible to develop secure means of integration between different platforms if the provider of the forum enables it. Did you consider these kinds of long-term use cases in your planning so far? Do you have a vision for how collaboration with “non-CEA” affiliated projects could look in the future?
Two thoughts, one on the object-level, one on the meta.
On the object level, I’m skeptical that we need yet another platform for funding coordination. This is more of a first-blush intuition, and I don’t propose we have a long discussion on it here, but just wanted to add my $0.02 as a weak datapoint. (Disclosure — I’m part of the team that built EA Funds and work at CEA which runs EA Grants, so make of that what you will. Also, to the extent that the sense is that small projects are falling through the gaps because of evaluation-capacity constraints, CEA is currently in the process of hiring a Grants evaluator.)
On the meta level (i.e. how open should we be to adding arbitrary integrations that can access a user’s forum account data), I think there’s definitely some merit to this, and I can envisage cool things that could be built on top of it. However, my first-blush take is that providing an OAuth layer, exposing user data etc. is unlikely to be a very high priority (at least from the CEA side) when considered against other possible feature improvements and other CEA priorities, especially given the likely time cost involved in maintaining the auth system where it interfaces with other services, and the magnitude of the impact that I’d expect having EA Forum data integrated with such a service would have. However, as you note, the LW codebase is open source, so I’d suggest submitting an issue there, discussing with the core devs and making the case, and possibly submitting a PR if it’s something that would be sufficiently useful to a project you’re working on.
Thanks for your comment Sam!
Regarding the funding project, this was just meant to be one example of a possible project which could profit from integration, so I wasn’t really trying to argue the merits of the project in any detailed manner. However, to somewhat counterbalance your argument, the way I see it, it seems to make sense to try to aggregate resources around existing funding opportunities to help people understand the space better. From my own experience, it takes some time to wrap your head around who is offering what, in what format, etc. So there seems to be room for improved coordination, which may or may not involve new artifacts/software/platforms being developed. Moreover, having people be interested in this kind of topic seems to be a win for the community to me; I don’t think we are at the point where returns are diminishing drastically (i.e., one CEA grant evaluator is not gonna fix everything). If you want to talk in more depth about the topic I would love for you to join the Slack channel or contact me by mail: alex{at}herwix.com.
Regarding the meta point, this really was the gist of my post. I appreciate the positive attitude you seem to have towards a somewhat “open” model as I think that this would be an important step for the community from a technological point of view. As you say there are lots of cool things that could be done, once we have sorted out some of the basic infrastructure questions. CEA being open to integrating pull requests in this direction would be an awesome first step :)
The problem with down-voting is that it allows for views to be dismissed without any argument provided. It’s kind of bizarre to give a detailed explanation why you think X is Y, only to see someone has down-voted this without explaining a tad bit why they disagree (or why they “don’t find it useful”). I just can’t reconcile that approach with the idea of rational deliberation.
One solution would be to demand that every down-vote comes with a reason, to which the original poster can reply.
This has been proposed a couple of times before (/removing downvotes entirely), and I get the sentiment that writing something and having someone ‘drive-by-downvote’ it is disheartening/frustrating (it doesn’t keep me up at night, but a lot of my posts and comments have 1-2 downvotes on them even if they end up net-positive, and I don’t really have a steer as to what problem the downvoters wanted to highlight).
That said, I think this is a better cost to bear than erecting a large barrier for expressions of ‘less of this’. I might be inclined to downvote some extremely long and tendentious line-by-line ‘fisking’ criticism, without having to become the target of a similar reply myself by explaining why I downvoted it. I also expect a norm of ‘explaining your reasoning’ will lead to lots of unedifying ‘rowing with the ref’ meta-discussions (“I downvoted your post because of X”/ “How dare you, that’s completely unreasonable! So I have in turn downvoted your reply!”)
Hey Gregory, thanks for commenting on this. The problem with the idea that downvoting signifies “less of this” is that the poster has no clue what “this” refers to, and hence they’re at a loss when trying to produce less of it. And after all, why would they? All one can conclude is: “There are people here who don’t like reading this. Well, that tells me more about this audience (unable to critically engage with my points) and their biased viewpoints than about my post. In fact, it doesn’t tell me anything about the arguments provided in my post.”
As for meta-discussions on the reasons for downvoting, I think they’d be rather healthy: they’d expose the expectations, values, and even biases held by the forum’s participants.
One downside of critical comments is that they tend to draw attention to the discussion. Mass downvoting suggests that something is so low quality you don’t have to pay attention to it.
Yeah, obvious crap posts (like spam) will be massively downvoted. Otherwise, I’ve never seen a serious post here that was only massively downvoted. Rather, you’d have some downvotes and some upvotes, and the case you describe doesn’t capture that situation. In fact, an initial row of downvotes may misleadingly give such an impression, leading some people to ignore the issue, while a later row of upvotes may actually show that the issue is controversial, and as such indeed deserves further discussion.
Hey Dunja, it’s true that a downvote provides less information than a comment, but I think it does provide some information, and that people can update based on that information, particularly if they get similar feedback on multiple comments: e.g. I might notice “Oh, when I write extremely short comments, they’re more likely to be downvoted, and less likely to be upvoted. I’ll elaborate more in the future” or similar.
Hi Max! I agree, it does provide information, but the problem is that the information is too vague, and it may easily reflect sheer bias (as in: “I don’t like any posts that question the work of OpPhil”). I think this is a strong sentiment in this community, and as an academic who is not affiliated with OpPhil or any other EA organization, I’ve noticed numerous cases of silent rejection of a certain problem. I don’t think this is an issue for “mainstream” EA topics (points on which the majority here agrees). But as soon as it comes to polarized issues (say, the funding of non-academic institutions to conduct academic research), the majority that downvotes doesn’t say a word. I found it quite entertaining (but also disappointing) when I made a longer post on this topic, only to find a bunch of downvotes without concrete engagement with the topic. My interpretation of what happened there: people dislike someone making waves in their little pond.
I understand you may wish to proceed as you’ve suggested, but eventually this community will push away dissenters who are very fond of EA but just don’t see any point in presenting critical arguments on this platform.
Many of these concerns seem to be symmetric, and would also imply we should make it harder to upvote.
Yes, that’s a good point, I’ve been wondering about this as well. According to one (pretty common) approach to argumentation, an argument is acceptable unless challenged by a counterargument. From that perspective:
upvoting = an acknowledgement of the absence of a counterargument.
downvoting = an observation that there is a counterargument, in which case it should be stated.
This is just an idea off the top of my head; I’d be curious to discuss it in more detail since I find it genuinely interesting :)
How about making it so that a menu pops up when you click the downvote button? There could be a number of default options (e.g. personal attack, unsupported assertion, spam, etc.) and an option to write in a brief explanation (perhaps limited to 140 characters). That would ensure that the poster gets some feedback without requiring every downvoter to provide an explanation.
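To make the idea a bit more concrete, here is a rough sketch of what the data flow could look like; the names and the exact reason categories below are just placeholders, not a proposal for the actual implementation:

```typescript
// Hypothetical sketch of the proposed downvote-reason flow (names invented).
type DownvoteReason =
  | { kind: "personal_attack" }
  | { kind: "unsupported_assertion" }
  | { kind: "spam" }
  | { kind: "other"; note: string }; // free-text explanation, capped at 140 characters

interface DownvoteEvent {
  postId: string;
  voterId: string;
  reason: DownvoteReason;
}

// The downvote is only recorded once a reason has been chosen; the reason
// would then be surfaced (anonymously) to the post's author as feedback.
function recordDownvote(event: DownvoteEvent): DownvoteEvent {
  if (event.reason.kind === "other" && event.reason.note.length > 140) {
    throw new Error("Explanation must be 140 characters or fewer");
  }
  return event; // in a real system this would be persisted somewhere
}
```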
I think that this looks like a promising feature, I’ll add it to our list of things we might do once the beta is stable.
That’d probably already be better than nothing ;) Then again, I’m afraid most people would still just (anonymously) downvote without giving reasons. It’s much easier to hide behind an anonymous veil than to take a stance and open yourself up to debate.
In fact, I’d be curious to see some empirical data on how correlated the act of downvoting and the absence of commenting are. My guess is that those who provide comments (including critical ones) mostly don’t downvote, except in extreme cases (e.g. discrimination, content that is obviously off-topic for the forum, obvious misinformation, etc.).
Just to clarify, my proposal is that the downvote would only be counted if the person selected a reason. When I said “without requiring every downvoter to provide an explanation,” I meant without requiring every one of them to type out their own explanation (since they can rely on the defaults or on what a previous person has written).
Ahh, now I get you! Yeah, that sounds like a good idea! As I’ve mentioned in another reply, I wouldn’t require the same for upvotes, because they may imply the absence of counterarguments, while a downvote implies a recognition that there is a problem, in which case it’d only be fair to state what it is.
If we upvote someone’s comments, then we trust them as a better authority, so we should give their votes greater weight in vote totals. So it seems straightforward that a weighted vote count is a better estimate of the quality of a comment.
The downside is that this can create a feedback loop for a group of people with particular views. Having normal votes go from 1x to 3x over the course of many thousands of karma seems like too small a change for that to happen. But the scaling of strong votes all the way up to 16x seems very excessive and risky to me.
Another downside is that it may encourage people to post stuff here that is better placed elsewhere, or left unsaid. I think that after switching to this system for a while, we should take a step back and see if there is too much crud on the forums.
I think the LW mods are considering features that will limit how many strong upvotes users can give out. I think the goal is for strong upvotes to look less like “karma totals get determined strictly by what forum veterans think” and more like “if you’re a particularly respected and established contributor, you get the privilege of occasionally getting to ‘promote/feature’ site content so that a lot more people see it, and getting to dish out occasional super-karma rewards”.
Hey, first of all, thanks for what I’m sure must have been a lot of work behind this. Many of these ideas seem very sensible.
Am I right in assuming that the scale for the upvotes was intended to be roughly-but-not-exactly logarithmic? And do downvotes scale the same way?
Yep, we chose them based on a logarithmic scale, and then jiggled them around to land on rounder numbers.
And yep, vote power also applies to downvotes, at least on LW, and I would be somewhat surprised if we did something different here.
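For anyone curious what “roughly logarithmic” tiers look like in practice, here is an illustrative sketch; the thresholds and function names below are placeholders to show the shape of the mapping, not the actual numbers from the announcement:

```typescript
// Placeholder tiers, sorted from highest karma requirement to lowest.
const NORMAL_VOTE_TIERS: Array<{ minKarma: number; power: number }> = [
  { minKarma: 25000, power: 3 },
  { minKarma: 1000, power: 2 },
  { minKarma: 0, power: 1 },
];

// Look up the vote weight a user's total karma entitles them to.
// Strong votes would use the same idea with a steeper curve.
function votePower(totalKarma: number, tiers = NORMAL_VOTE_TIERS): number {
  const tier = tiers.find((t) => totalKarma >= t.minKarma);
  return tier ? tier.power : 1;
}

// Downvotes reuse the same weight, just with a negative sign.
const karmaDelta = (totalKarma: number, isDownvote: boolean) =>
  (isDownvote ? -1 : 1) * votePower(totalKarma);
```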
The post says that no user has more than 1,000 karma. But Peter_Hurford has more than 8,000 karma. I’m bringing this up not to quibble but rather because I’m wondering whether the threshold was meant to be set differently or perhaps users will lose some fraction of their karma during the switch.
Also, for link posts, I think it might be a good idea to require cross-posting the summary or an excerpt.
Yes, that is a little ambiguous. It is trying to say that no user is at the 3-point level (which requires 25,000 karma or more). Currently no user is above the 2-point level for a regular up/downvote, or the 8-point level for a strong up/downvote. We have no plans to adjust karma in the switch.
Re link posts, that does seem like a good idea. We will be publishing a much more detailed ‘Proposed Moderation Standards’ post closer to launch.
[edited for clarity]
That totally makes sense now. Thanks for the reply.
If you add a tag feature, can you make it so that authors can add tags to posts imported from EA Forum 1.0? I think it’d be great if someone interested in animal suffering could easily see all the EA Forum posts related to animal suffering.
And would you be willing to add a feature that allows you to tag individuals? (For this to work, you’d have to provide notifications in a more prominent way than the current ‘Messages’ system.)
I’ve always found the “karma” system utterly repulsive and deeply disturbing (across online forums in general). It’s a tool that can so easily catalyze bias and censorship, to the point that it becomes far more dangerous than it is useful. And giving older members more heavily weighted votes is extremely dangerous: it discourages new members from questioning ideas held by the dominant majority, hence leading to dogmatism.
The assumption that this will be prevented by the already existing variety of views is not nearly a good enough guarantee: on the one hand, all the current members may share a certain (unreflected) bias; on the other hand, some members may become less active for periods of time, which may break down the plurality of views that’s supposed to keep everyone’s biases in check.
What’s the alternative? Perhaps value-based votes, allowing you to see what like-minded people (your interest-neighbors, so to say) like. Think of last.fm and the way it ranks music recommended to you, given what your neighbors are listening to, while still letting you check the newest or most-liked content even if it doesn’t belong to your immediate set of preferences. If that’s hard to implement, that doesn’t mean heading towards a potentially dogmatic and undemocratic community is the way to go (where by undemocratic I mean directly impeding democratic principles, such as the ability of a community to sustain the challenge of minority opinions, and to preserve channels via which those opinions can be heard and openly argued with).
By the way, I don’t think the idea of karma has anything to do with “elitism”. It has to do with in-group bias and the dangers that emerge from it, such as the censorship of minority views. So if I weigh the danger of in-group bias against a somewhat tedious search through many posts, I’d always prefer the latter.
Thank you for your very interesting and thoughtful comment!
I just want to extend your thinking a little bit further into possible solutions. The blockchain space in particular has provided some interesting new ideas about trust and how to organize communities around it. For example, Stellar’s Consensus Protocol works with “quorum slices”, determined by people you trust, which give you a “personal” view of the overall state. Similarly, you could nominate a “member slice” in which some members’ votes are excluded, weighted down, or weighted up when calculating your post weights. This would allow you to tailor what you see to your needs as your thinking evolves, so if a tyranny ensues you have the possibility of “navigating around” it. And depending on how you implement it, people could subscribe to your view of the forum and thus propagate this new algorithm for weighting posts. Hope this is not too complicated… (for those interested in more details, here is a link to a graphic novel explaining the Stellar SCP: https://www.stellar.org/stories/adventures-in-galactic-consensus-chapter-1)
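Very roughly, the “member slice” idea could look something like this; it’s a sketch with invented names rather than a worked-out design:

```typescript
// Each vote on a post, possibly already karma-weighted by the forum.
interface Vote {
  voterId: string;
  value: number; // e.g. +1 / -1, or a weighted value
}

// A reader's personal trust weighting over other members:
// 0 mutes a member's votes, values above 1 boost them.
type MemberSlice = Map<string, number>;

// Compute a personalized score for a post by re-weighting its votes
// according to the reader's member slice.
function personalizedScore(votes: Vote[], slice: MemberSlice): number {
  return votes.reduce((sum, v) => {
    const weight = slice.get(v.voterId) ?? 1; // default: count the vote normally
    return sum + v.value * weight;
  }, 0);
}
```

A reader could then also publish their slice, so that others can “subscribe” to the same view of the forum, as suggested above.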
My main point was just to agree with you that a very hierarchical voting system may profit from some “countermeasures” that can be used in times of misuse or tyranny.
Thanks for the input, alexherwix! This proposal sounds very interesting. In general, I find this question really challenging: which model of quality control best mitigates the dangers of an in-group bias? On the one hand, the model you suggest (which seems quite close to what I had in mind above) seems really appealing. On the other hand, it would be interesting to see actual studies on the comparative impact of different solutions: e.g. the trust-based mechanism vs. top-down (“institutional”) injection of opposing views. For example, the controversial tab on reddit seems to do a nice job in keeping polarizing views around.
DON’T BE A NECROMANCER!
LessWrong 2.0 resurrected all posts deleted by their original posters, which then had to be individually deleted again by users who may or may not have been aware that this had happened. Please ensure this isn’t replicated with Effective Altruism Forum 2.0. If I can’t control my content, I’ll move somewhere else where I can.
Huh, I am unaware of this. Feel free to ping us on Intercom about any old posts you want deleted. The old database was somewhat inconsistent about the ways it marked posts as deleted, so there is a chance we missed some.
Yeah MoneyForHealth, it does seem like it would be useful if you could point out instances of this happening on LW. Then we’ll have a better shot at figuring out how it happened, and at avoiding it happening with the EA Forum migration.
Would it be possible to introduce a coauthoring feature? Doing so would allow both authors to be notified of new comments. The karma could be split if there are concerns that people would free ride.
I’d like to echo a strong concern that karma based voting will lead to groupthink etc.
I’d feel substantially better if it was karma based upvotes only, and no karma based downvotes. Karma based downvotes allow community insiders to effectively kill posts.
Feature request: show reading times
It would be useful to show approximate reading times for posts, so readers can decide whether to commit to a long article. This saves EAs valuable time and improves engagement with posts.
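For what it’s worth, reading-time estimates are usually just word count divided by an assumed reading speed; a minimal sketch of that approach (not the actual LessWrong implementation) might look like this:

```typescript
// Assumed average reading speed; adjust to taste.
const WORDS_PER_MINUTE = 250;

// Estimate reading time in whole minutes from a post's plain-text body.
function estimatedReadingMinutes(body: string): number {
  const words = body.trim().split(/\s+/).filter(Boolean).length;
  return Math.max(1, Math.round(words / WORDS_PER_MINUTE));
}

// e.g. a 2,000-word post would display as roughly an 8 minute read.
```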
Yup, we actually already built this for LessWrong 2.0 (check it out on the frontpage, where each post shows how many minutes it takes to read), so you’ll get it when the CEA team launches the new EA Forum 2.0.
This is my first comment, but I’m currently writing some detailed reports that I hope to post later. Are there any restrictions on new users posting? I haven’t tried posting yet, so I’m not aware of any current restrictions, but will there be new restrictions on new users in the future, beyond the karma system?
Currently I believe we have a threshold of 14 karma to post. If you have a post you’d like to submit before you’ve reached that level, you can write to the moderators at forum@effectivealtruism.org and we’ll review the post and give you posting ability if approved.
Good question about whether this will be the same in the new system—we’ll make a decision about that, but the default is to keep the threshold similar.
Would it be possible to have a ‘sort by area’ option? To see what people in the local community are writing, reading, commenting on, working on etc. May need location tags or location to be listed in profiles.
Following on from this, will the number of views each post gets be measured? Is that currently used to rank ‘top’ posts? Is the ratio of readers with a hub profile to those without one measured?
Having a quick suggestions/feedback form at the side of the main page or in the FAQ section may be useful. I’ve introduced them on a few other online communities and they’ve been more popular and more heavily used than expected.
Would it be possible for you to add a minimum font size requirement? Posts like this one are hard for me to read.