Okay, I clicked on it and it seems all fine to me. (To anyone still wary, I Ben Pace promise my impression clicking through is that it’s legit, and actually quite detailed.)
I am not clicking on a link with the URL malicious.link/start :-P
Ok, that’s quite a lot more helpful than I’d realised—why not make it more prominent though?
You can get it by clicking “view all posts” at the bottom of the recent post list on the frontpage. As you can see on LessWrong (which this site is a clone of) it’s also permanently on the left side of the screen even more prominently. The folks working on this site have slightly different site goals and haven’t included that (yet).
I mainly used the ‘top posts in <various time periods>’ option (typically the 1 or 3 month options, IIRC); median time between visits was probably something like 1-3 months, so that fit pretty well.
Interesting. I realise there’s a class of users who check in at about that frequency and want to see the highlights from the past couple of months. On LW we have the curated section which does this sort of thing, but the EA Forum doesn’t, so I guess it’d be especially useful here. This does move it up my priorities list quite a bit. Thx.
even on the old forum I strongly wished for a way to filter by subject… my favourite forums for UX were probably the old phpBB style ones, where you’d have forums devoted to arbitrarily many subtopics
My teammate Oli Habryka has strong opinions here, I’ll let him write stuff if he has time. Current plan is to not do this anytime soon.
I agree following users is important.
Often a friend would link me to a post that had already been around for a week or two when I read it.
In general I myself keep cool-looking tabs open for a while, and if I don’t read them before closing them, I know there’s no easy way to get back to them. I agree many sites are more static than this Forum: compare Hacker News to SlateStarCodex, where I can see all the SSC posts from the past few months listed on a single screen, whereas on HN I can’t even see all the posts from the last hour at once. But for the majority of places I’m interested in, if I don’t save the link prominently or recall the title clearly, I will lose the posts, so I’m surprised this problem feels more prominent for you with this Forum than elsewhere.
Here’s an editor guide I just updated.
A year ago I did write a little editor guide, but many parts of it quickly went out of date. I’ll post it to the Forum if I update it.
Edit: I updated it.
Gotcha. Not being able to easily copy in from G-Docs, and footnotes + pictures being lotsa work.
Chatting with the team, their sense is that copy-pasting footnotes is very unlikely to ever work between editors (e.g. I don’t expect footnotes to copy over functionally into MS Word, Dropbox Paper, or any other editor you might use). If that’s the case, I would like to build the ability to do a direct import from G-Docs, which would solve these problems.
Also agree about the images. The big thing we don’t do right now is host images, which means you have to upload them somewhere on the internet yourself and then paste the URL into our editor.
The current state of the plan is to do a big overhaul of the editor framework either this quarter or next, where I expect us to spend time on these issues and others. In general we found that making small edits to the current editor for things like this was too costly in both the short and long run, and we’d also prefer an editor a bit more like Google Docs in a bunch of ways.
Can you say more about what you find frustrating about using the editor/posting? Am also interested to know if you find it better/worse than the old site.
Thx for the post.
Re: searching for great posts, there is also an archive page where you can sort by top (and other orderings) via the gear menu.
Can you say more about how you used the old forum? I’m hearing something like “A couple of times per year I’d look at the top-posts list and read new things there”. (I infer a couple of times per year because once you’ve done it once or twice I’d guess you’ve read all the top posts.) I think that’s still very doable using the archive feature.
Am also surprised that you lose posts. My sense is that for a post to leave the frontpage takes a couple of days to a week. Do you keep tabs open that long? Or are you finding the posts somewhere else?
Ta for trying to generally make the Forum a nicer place, Michelle. That said, I want to say that in this case, for me, I had zero negative experiences reading the post, and the line “The latest version has reached the point where I just don’t see the point of visiting the forum any more” was the most useful part of the post for me. I’ve not heard anyone tell me the new Forum is unusable for them, and I’m interested in further (unfiltered) info from Arepo + others (though I don’t have a lot of time to engage).
(@everyone else, in case it’s not apparent, I’m part of the LW team who created the codebase for the new site)
The obvious related paper is Bostrom’s Where Are They? Why I Hope the Search for Extraterrestrial Life Finds Nothing. It argues not that the search itself would be an x-risk, but that finding advanced life in the universe would (via anthropics and the Great Filter argument) cause us to heavily update toward some x-risk being in our near future. Very interesting.
(Relatedly, Nick was interviewed on this paper for the last ~1/3rd of his interview on the Sam Harris podcast.)
I may be misremembering, but I have the cached belief that GiveWell records and publishes something like all of its meetings, including board meetings. If so, you could listen to the most recent board meeting to see where things stood.
A high quality podcast has been made (for free, by the excellent fanbase). It’s at www.hpmorpodcast.com.
I think this comment suggests there’s a wide inferential gap here. Let me see if I can help bridge it a little.
If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well-developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.
I feel fairly strongly that this goal is still important. I think that the most valuable resource the EA/rationality/LTF community has is the ability to think clearly about important questions. Nick Bostrom advises politicians, tech billionaires, and the founders of the leading AI companies, and it’s not because he has the reasoning skills of a typical Math Olympiad winner. There are many levels of skill, and Nick Bostrom’s is much higher.
It seems to me that these higher level skills are not easily taught, even to the brightest minds. Notice how society’s massive increase in the number of scientists has failed to produce anything like linearly more deep insights. I have seen this for myself at Oxford University, where many of my fellow students could compute very effectively but could not then go on to use that math in a practical application, or even understand precisely what it was they’d done. The author, Eliezer Yudkowsky, is a renowned explainer of scientific reasoning, and HPMOR is one of his best works for this. See the OP for more models of what HPMOR does especially right here.
In general I think someone’s ability to think clearly, in spite of the incentives around them, is one of the main skills required for improving the world, much more so than whether they have a community affiliation with EA. I don’t think that any of the EA materials you mention help people gain this skill. But I think for some people, HPMOR does.
I’m focusing here on the claim that the intent of this grant is unfounded. To help communicate my perspective here, when I look over the grants this feels to me like one of the ‘safest bets’. I am interested to know whether this perspective makes the grant’s intent feel more reasonable to anyone reading who initially felt pretty blindsided by it.
 I am not sure exactly how widespread this knowledge is. Let me just say that it’s not Bostrom’s political skills that got him where he is. When the future-head-of-IARPA decided to work at FHI, Bostrom’s main publication was a book on anthropics. I think Bostrom did excellent work on important problems, and this is the primary thing that has drawn people to work with and listen to him.
Although I think being in these circles changes your incentives, which is another way to get someone to do useful work. Though again, I think the first part matters more for getting people to do the useful work you haven’t already figured out how to incentivise; I don’t think we’ve figured it all out yet.
Ah yes, agree. I meant coordination, not collusion. Promotion also seems fine.
MIRI helped us know how much to donate and how much of a multiplier it would be, and updated this recommendation as other donors made their moves. I added something like $80 at one point because a MIRI person told me it would have a really cool multiplier, but not if I donated a lot more or a lot less.
I imagined Alex was talking about the grant reports, which are normally built around “case for the grant” and “risks”. Example: https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology
I haven’t yet finished thinking about how the EA Forum Team should go about doing this, given their particular relationship to the site’s members, but here’s a few thoughts.
I think that, for a platform to incentivise long-term intellectual progress in a community, it’s important that there are trusted individuals on the platform who promote the best content to a place on the site that is both lasting and clearly more important than other content, as I and others have done on the AI Alignment Forum and LessWrong. Otherwise the site devolves into a news site, with a culture that depends on who turns up that particular month.
I do think the previous incarnation of the EA Forum was much more of a news site, where the most activity occurred when people turned up to debate the latest controversy posted there, and that posts and discussion on the new Forum are much more focused on the principles and practice of EA rather than conflict in the community.
(Note that, while it is not the only or biggest difference, LessWrong and Hacker News both have the same sorting algorithm on their posts list, yet LW has the best content shown above the recent content, and thus is more clearly a site that rewards the best content over the most recent content.)
It’s okay to later build slower and more deliberative processes for figuring out what gets promoted (although you must move much more quickly than the present-day academic journal system, and with more feedback between researchers and evaluators). I think the Forum’s monthly prize is a good way to incentivise good content, but it crucially doesn’t ensure that the rewarded content will continue to be read by newcomers 5 years after it was written. (Added: and similarly, new EAs on the Forum today are not reading the best EA content of the past 10 years, just the most recent content.)
I agree it’s good for members of the community to be able to curate content themselves. Right now anyone can build a sequence on LessWrong, then the LW team moves some of them up into a curated section which later get highlighted on the front page (see the library page, which will become more prominent on the site after our new frontpage rework). I can imagine this being an automatic process based on voting, but I have an intuition that it’s good for humans to be in the loop. One reason is that when humans make decisions, you can ask why, but when 50 people vote, it’s hard to interrogate that system as to the reason behind its decision, and improve its reasoning the next time.
(Thanks for your comment Brian, and please don’t feel any obligation to respond. I just noticed that I didn’t intuitively agree with the thrust of your suggestion, and wanted to offer some models pointing in a different direction.)
I did spend a day or two collating some potential curated sequences for the forum.
- I still have a complete chronological list of all public posts between Eliezer and Holden (& friends) on the subject of Friendly AI, which I should publish at some point.
- I spent a while reading through the work of people like Nick Bostrom and Brian Tomasik (I hadn’t realised how much amazing stuff Tomasik had written).
- I found a bunch of old EA blogs by people like Paul Christiano, Carl Shulman, and Sam Bankman-Fried that it would be good to collate the best pieces from.
- I constructed mini versions of things like the Sequences, the Codex, and Owen Cotton-Barratt’s excellent intro to EA (Prospecting for Gold) as ideas for curated sequences on the Forum.
I think it would be good from a long-term community norms standpoint to know that great writing will be curated and read widely.
Alas, CEA did not seem to have the time to work through any sequences (it seemed like there were a lot of worries about what signals the sequences would send, and working through those worries was very slow going). If this ever gets going again, it would be good to have a discussion pointing to any good old posts that should be included.
+1. A friend of mine thought it was an official statement from CEA when he saw the headline, and was thoroughly surprised and confused.