Agree/disagree voting (& other new features September 2022)
We’re[1] enabling two-factor voting (karma and agree/disagree) on all (new) comment threads in the Forum. We’ll be reviewing feedback and will evaluate how the change has gone in October, but we expect to keep the feature.
We wrote this post to share this update and some other changes. On that note, the Forum team continues to grow; Ollie Etherington and Will Howard recently joined our team, and we’re eagerly anticipating our new product manager. We’re also still excited to hear about promising UX designers interested in working on the Forum.
As always, feedback is welcome — you can comment on this post with specific input, or request more features via the Forum feature suggestion thread.
Summary of the changes
Two-factor voting is going live on the Forum. This adds agree-disagree voting on top of the usual karma system. ⬇️
We’ve started curating Forum posts that don’t get enough visibility or are especially good as examples of what Forum posts should be (according to us). ⬇️
There’s finally a way to copy-paste from a Google Doc with footnotes. ⬇️
We’re testing a 1:1 service to connect people interested in working in a field with experts in that field (the current service is for people interested in mitigating global catastrophic biological risks). ⬇️
Crossposting to and from LessWrong is easier. ⬇️
You can add topics to your public profile. ⬇️
Some other changes ⬇️
What’s new
Two-factor voting (karma and agree/disagree) on all comment threads
You can now vote separately on whether you appreciate a comment (and think more people should see it) and on whether you agree with the contents of the comment. Only the first of these will affect the poster’s karma.
The LessWrong team implemented this feature on their site two months ago, and their announcement post shares details about how it works and what they like about it; we echo basically everything in that post. Some key points made in that post:
Agree/disagree voting does not translate into a user’s or post’s karma — its sole function is to communicate agreement/disagreement. It has no other direct effects on the site or content visibility (i.e. no effect on sorting algorithms).
For both regular voting and the new agree/disagree voting, you can cast a normal-strength vote or a strong vote. Click once for a normal-strength vote; click-and-hold on desktop (or double-tap on mobile) for a strong vote. The weight of your strong vote is approximately proportional to your karma on a log scale (exact numbers here).
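As a rough illustration only (the Forum uses a specific lookup table, linked above, rather than any exact formula), a strong-vote weight that grows with the log of a user's karma might look like this:

```python
import math

def strong_vote_weight(karma: int) -> int:
    """Illustrative sketch only: a strong-vote weight that grows roughly with log10(karma).

    The actual Forum/LessWrong weights come from a lookup table (linked above);
    this just shows what "approximately proportional to karma on a log scale" means.
    """
    if karma < 10:
        return 1
    return 1 + int(math.log10(karma))

# Under this sketch, a user with 100 karma casts strong votes worth 3,
# and a user with 10,000 karma casts strong votes worth 5.
print(strong_vote_weight(100), strong_vote_weight(10_000))
```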
We’re really grateful to the LessWrong team for all their work on this and are excited about this as a potential improvement on the Forum.
How sure are we that we want to keep this feature?
Our default plan is to keep it, although we welcome feedback and plan to re-evaluate in October. We had planned to test the feature on the Forum more thoroughly by asking authors to opt into enabling it on their posts’ comment sections. However, that testing has been slow, and, seeing the results on LessWrong (and on the Forum), we increasingly felt that we were wasting a great tool.
Curation of especially good posts
The number of posts that get shared on a single day is growing, and it’s increasingly hard for Forum users to keep up and see the best and most relevant-to-them posts. One way we’re trying to address this is by curating some of our favorite posts. We’ll generally leave a comment explaining why we curated the post.
How this works
The exact details of this system might change, but for now, they’re as follows. When a post is curated, it reappears at the top of the Frontpage with a little star next to it. The timestamp on it will show the time it got curated (although the timestamp on the post page itself will remain unchanged).
If two or more posts are curated, the three most recently curated will appear at the top of the Frontpage. Once you open a curated post, it will generally go back to where it would normally be on the Frontpage (given its timestamp and karma); only one curated post — the most recently curated, or the one you opened last — will remain at the top of the page for you.
Who decides what to curate
Some people have the ability to “suggest curation” by clicking on the three dots under the title of a post and selecting the appropriate option. For now, Lizka (that’s me) is the only one making the final decisions.
Reasons for curating a post
We think it’s just extremely valuable or important, or it prompts a conversation that’s very important to have
We think it’s very valuable and was overlooked for some reason (like an unfortunate time-of-posting or a boring title)
We want to signal that this is the kind of post we want more of on the Forum
You’re very welcome to share feedback on curation.
Copy-pasting footnotes from a Google Document
We’re excited to share that we’ve developed a much-asked-for feature. If you’re copy-pasting from a Google Document, you can now copy-paste footnotes in the normal (WYSIWYG) post editor in the following way.[2]
Publish your Google Doc to web
You can do this by clicking on File > Share > Publish to web
Then approve the pop-up asking you to confirm (hit “Publish”)
Then open the link that you’ll be given; this is now the published-to-web version of your document.
Select the whole text, including footnotes, and copy that. (If you’d like, you can now unpublish the document.)
Open the Forum text editor (WYSIWYG), and paste the selection.
Instructions in image form:
If you don’t want to copy-paste, you can insert footnotes manually or use Markdown.
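For example, in the Markdown editor, footnote syntax looks roughly like this (a minimal sketch; check the Forum user manual for the exact format it supports):

```markdown
Here is a sentence that needs a footnote.[^1]

[^1]: The footnote text goes here.
```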
1:1 service to get advice on biosecurity
You can get some personalized advice from a professional working to reduce global catastrophic biological risks. The 30-minute meeting with an advisor will allow you to get career advice, learn about their experience, and ask questions about topics of common interest.
We’re testing this service, and depending on how the test goes, we might expand the service to other fields and professions.
This is how the service looks right now:
Crossposting to and from LessWrong is easier
When you draft a post on the Forum, you now have the option to automatically crosspost it to LessWrong (you’ll need to be logged in on both sites), and vice versa. While you’re editing the post, go to “Options” at the bottom and select “Crosspost to [LessWrong].”
Any updates you make in the original post will appear in the crosspost. The comment sections will be distinct, but both will prompt users to see the other.
You can add topics to your public profile
If you’d like to share your interests, you can add topics to your public profile, which will also subscribe you to them by default (you can undo that). To do this, go to “Edit Profile” from your user profile page, then scroll down to “My Activity” and search for the topics you’re interested in. (Here’s a page that lists all topics.)
Other changes
Some of the smaller changes are parts of larger projects, and some are more routine improvements to the Forum (e.g. elimination of ongoing bugs).
We’re ending (un-pinning) the “Who’s hiring” and “Who wants to be hired” threads this week. We might bring back a variation on them, but we’re currently evaluating how they went. If you have any feedback, please feel free to share it!
We’ve added profile images to private messages.
We’ve fixed a bug in our “Report user” button, so that should work properly now.
You can pin comments in your profile.
If you have edit access to an event (if you’re an author of the event, or if you’re a declared group organizer of the group hosting it), you can duplicate the event so that you don’t need to start from scratch when creating a new event.
Note from Lizka: I’m really grateful to everyone who helped with these features, including everyone who worked on the code, the users who reported issues or contributed user interviews, and the other people supporting the Forum in a variety of other ways. Thanks, all!
[1] This post was written by Lizka and reviewed by JP Addison and some other members of the Forum team.
[2] This allowed us to get around the impossibility of selecting all footnotes in a Google Doc.
re hiring thread: I at least am still subscribed to the “Who’s hiring” thread and I read every comment.
re agreement karma: I still really don’t like it and find it very confusing ):
I can’t imagine a case where I strongly disagree with something, but want to increase its visibility to others, nor a case where I want to decrease visibility (e.g. because something is demagogic) but still want to signal that I agree with the conclusion.
I think your comment is a good example (and from the votes it looks like I’m not the only one). You’re making a good faith, sensible argument for a position I don’t hold—I think the disagreement karma is a big improvement.
I think your comment deserves an upvote for contributing to the discussion, but I disagree and wanted to indicate that.
I’m really enjoying the irony. But still in the vast majority of cases my regular and agreement votes would go the same way. I downvote comments when I think they cause harm or promote bad ideas (which necessarily means I disagree with them), and strongly downvote them when they promote outright dangerous ideas.
I think that’s fine, and you can just do this :) If a feature isn’t useful to you, you don’t have to use it.
This seems like a pretty dishonest action to me fwiw, unless you’re referring to technical information hazards (in which case reporting the comment is also appropriate).
Though perhaps I’m misunderstanding you.
Why dishonest? What do you take a strong downvote to mean? I think I’m really misunderstanding most people’s notion here about the role of upvotes and downvotes.
As examples of both my stated actions: if a user wrote “you’re suggesting something that Trump wanted to do, so I think it’s bad”, I’d downvote that comment; if a user wrote “The public doesn’t know what’s good for them, we should eventually find a way to do good without ever having to answer to politicians”, I’d think that’s the kind of arrogance that’s outright dangerous and should be contained, and I’d therefore strongly downvote it.
It’s a separate discussion that I’m planning to write a post about (but probably never will 😅) - but I think EAs widely overestimate the size of the space of infohazards, and almost no comment a sane person could make would ever be one. I further think this is dangerous in itself, as it builds on a wrong belief that we’re better equipped to tackle problems than bad actors are to rediscover them.
So if someone wrote a detailed recipe for a novel pathogen, yeah I’d report them. Anything less than that, not really.
I don’t understand your logic at all. How is it contributing from your POV?
Found this, a good example
Easy answer: any uncomfortable/repugnant conclusion would fall under “upvote on karma but downvote on agree/disagree.”
One example is this uncomfortable conclusion:
https://forum.effectivealtruism.org/posts/t3Spus6mhWPchgjdM/valuing-lives-instrumentally-leads-to-uncomfortable
One of the most important skills in life is to separate uncomfortable/repugnant conclusions from their truth values. In other words, just because a conclusion is uncomfortable/repugnant does not mean that the conclusion is false; and vice versa, comfortable conclusions are not necessarily true.
(I think it’s likely that I misunderstood at least some of the other arguments in this thread).
I think good arguments with uncomfortable/repugnant conclusions should be a) upvoted to the extent that they are good arguments and b) agreed or disagreed with to the extent that we believe the conclusions are true.
(and we may believe the bottom-line conclusions to be false for reasons that are outside the scope of the presented arguments).
I think we should be very willing to accept uncomfortable/repugnant conclusions to the extent that we believe they’re true. Our movement is effective altruism, not effective feel-good-about-ourselvesism. Since we probably live in the midst of multiple unknown moral catastrophes, one of the most important things we can do (other than averting imminent existential risk) is to carefully figure out which avertable moral catastrophes we currently live in. This search probably means evaluating the evidence we have, seeking out new evidence, and looking at the world with deliberation, care, and good humor. I expect moral disgust to be substantially less truth-tracking in comparison, and on the margins even net negative.
Losing access to our ability to think clearly is just really costly[1]. I’m not saying that we shouldn’t give this up at any price. But we should at least set the price to be very very high, and not be willing to sacrifice clear thinking quite so easily.
(“At first they came for our epistemology. And then they...well, we don’t know what happened next”)
A repugnant conclusion is only as true as the assumptions that went into it and the inference rules that chain it to them. I would agree with (and upvote) a comment that says “your assumptions ABC imply conclusion X which is horrible, so they can’t be right as stated”, and would disagree with (and downvote) a comment that says “Not only are you right about ABC, but we should even act according to conclusion X that they imply, even if it would seem horrible to some”.
Edit: I forgot to add that, while it’s a minor point in your comment, I really disagree that that’s “one of the most important skills in life”. Some applications might be important, e.g. “believing your plan is going to fail early enough to pivot to something else”, but there are quite a few more important ones.
The curation discussion made me think of this recent shortform post: “EA forum content might be declining in quality. Here are some possible mechanisms: [...]”
It seems like there has been an effort to get people less intimidated about posting to the Forum. I think this is probably good—intimidation seems like a somewhat bad way to achieve quality control. However, with less intimidation and higher post volumes, we’re leaning harder on upvotes & downvotes to direct attention and achieve quality control. Since our system is kind of like reddit’s [I believe reddit is the only major social media site that’s primarily driven by upvotes+downvotes rather than followings and/or recommendations], the obvious problems to fear would be the ones you see when subreddits get larger:
People who disagree with the current consensus get dogpiled with downvotes and self-select out of the community
Memes get more upvotes than in-depth content since they are more accessible and easier to consume
(My sense is that these are the 2 big mechanisms behind the common advice to seek out niche subreddits for high-quality discussion—let me know if you’re a redditor and you can think of other considerations.)
Anyway, this leaves me feeling positive about two-factor voting, including on toplevel posts. It seems like a good way to push back on the “self-selection for agreement” problem.
It also leaves me feeling positive about curation as a way to push back on the “popcorn content” problem. In fact, I might take curation even further. Brainstorming follows...
Imagine I am a forum user thinking about investing several weeks or months writing an in-depth report on some topic. Ian David Moss wrote:
Curation as described in the OP helps a bit, because there’s a chance someone will notice my post while it’s on the frontpage and suggest it for curation. But imagine I could submit an abstract/TLDR to a curator asking them to rate their interest in curating a post on my chosen topic. After I finish writing my post, I could “apply for curation” and maybe have some back-and-forth with a curator to get my post good enough. Essentially making curation on the forum work a bit like publication in an academic journal. While I’m dreaming, maybe someone could be paid to fact-check/red team my post before it goes live (possibly reflected in a separate quality badge, or maybe this should actually be a prereq for curation).
I think academic journals and online forums have distinct advantages. Academic journals seem good at incentivizing people to iron out boring details. But they lack the exciting social nature of an online forum which gets people learning and discussing things for fun in their spare time. Maybe there’s a way to combine the advantages of both, and have an exciting social experience that also gets boring details right. (Of course, it would be good to avoid academic publishing problems too—I don’t know too much about that though.)
Another question is the role of Facebook. I don’t use it, and I know it has obvious disadvantages, but even so it seems like there’s an argument for making relevant Facebook groups the designated place for less rigorous posts.
I’m not sure how much of a pain this would be implementation-wise (or stylistically), but I’d be curious to see agree/disagree voting for posts (rather than just comments). After all, arguments for having this type of voting for comments seem to roughly generalize to posts, e.g. it seems useful for readers to be able to quickly distinguish between (i) critical posts that the community tends to appreciate and agree with, and (ii) critical posts that the community tends to appreciate but disagree with.
Strong agree—there are plenty of posts that I think are rigorous, well-written, interesting etc., but whose conclusion or general stance I disagree with. It might also offer a more useful (and maybe less spicy) ‘sort by controversial’ function, where you can see posts that are highly upvoted but torn on agreement.
I love these changes, especially dis/agree voting! Thank you!
I am looking forward to someone creating a wacky dashboard where we can learn who are the most-upvoted but also most-disagreed-with posters on the Forum. If we think EA is getting too insular / conformist, maybe next time instead of a Criticism & Red-Teaming contest, we could give out an EA Forum Contrarianism Prize! :P
(related post)
Great to hear this feature now exists! (At least if it’s fairly easy to use—I haven’t tried it yet.)
Fwiw, this seems like a big enough deal to me and various EA researchers I know that I think it’d be worth having a separate post or other announcement about this, to increase how many people learn about it. I think many experienced Forum users won’t read this whole post or re-read the Forum user manual, so they may, by default, continue using less convenient approaches to footnotes for a while, or sometimes not bother to post footnote-heavy things to the Forum.
(But I’ll also go ahead and announce that part of this post to Rethink Priorities staff now, to at least make that group aware of this.)
Love the new voting axis.
Would it be possible to add a forum-wide search/sorting option for comments that score unusually high on the negative product of agreement and karma? It would help with finding posts that people really appreciate but still disagree with.
Usually, karma is strongly correlated with agreement on some level, even in this system. So if a comment has high disagreement and high karma, the karma has been deconfounded—it seems much more likely that people have updated on it or otherwise thought the arguments have gone underappreciated. And if a high proportion of people updated on it, then it’s more likely that I will too.
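A minimal sketch of that sort (hypothetical field names; this isn’t an existing Forum feature) would just rank comments by the negative product of karma and agreement:

```python
# Hypothetical sketch: surface comments that are highly upvoted (karma)
# but net-disagreed-with (negative agreement score).
comments = [
    {"id": "a", "karma": 40, "agreement": -12},
    {"id": "b", "karma": 15, "agreement": 20},
    {"id": "c", "karma": 60, "agreement": -3},
]

# Rank descending by -(karma * agreement): high karma with negative agreement comes first.
ranked = sorted(comments, key=lambda c: -(c["karma"] * c["agreement"]), reverse=True)
print([c["id"] for c in ranked])  # ['a', 'c', 'b'] for these made-up numbers
```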
Finding comments like this is a great way for me to increase my exposure to good arguments I haven’t encountered before.[1] If this sorting option existed, it would be the primary benefit of the agreement axis for me.
In general, I think research communities should prioritise the flow of information that updates people’s models of things (i.e. gears-level/model-building evidence as opposed to testimonial evidence). This is a departure from academic “veritistic” social epistemology, where the explicit aim is usually to increase average epistemic accuracy by making people update on testimony correctly. But most research in EA, I think, isn’t bottlenecked by more accurate beliefs (selecting the best-fit beliefs out of prevailing options). Instead, I think EA is bottlenecked by new insights and models, and you increase the rate of those by having more people exposed to gears-level evidence.
Can you give an example of a comment you really disagreed with, yet made you change your beliefs?
There are many I can’t recall, but these two comments, by Matthew Barnett and Paul Christiano, are examples. I mildly disagree with the former, and I strongly disagree with the latter, but I still found both of them very helpful.
Excellent. Agree/disagree voting is not only great, it’s one of the easiest ways to explain how EA and LW try to improve on the epistemics of internet discourse to outsiders who are otherwise relatively uninterested in what we do. I have seen people’s eyes light up when they hear about it.
Thanks for the updates!
Regarding Copy-pasting footnotes from a Google Document, I think it would be nice if after the copy-pasting:
Nested bullet points were not converted to non-nested bullet points.
The text in table header cells was not converted to section headings that appear in the left navigation panel.
The footnote links were not broken.